Search Results

Search found 37183 results on 1488 pages for 'string conversion'.

Page 410/1488

  • What should a domain object's validation cover?

    - by MarcoR88
    I'm trying to figure out how to do validation of domain objects that need external resources, such as data mappers/DAOs. Firstly, here's my code: class User { const INVALID_ID = 1; const INVALID_NAME = 2; const INVALID_EMAIL = 4; int getID(); void setID(Int i); string getName(); void setName(String s); string getEmail(); void setEmail(String s); int getErrorsForInsert(); // returns a bitmask for INVALID_* constants int getErrorsForUpdate(); } My worries are about the uniqueness of the email: checking it would require the storage layer. Reading others' code, it seems that two solutions are equally accepted: both perform the uniqueness validation in the data mapper, but some set an error state on the DO, user.addError(User.INVALID_EMAIL), while others prefer to throw a totally different type of exception that covers only persistence, like: UserStorageException { const INVALID_EMAIL = 1; const INVALID_CITY = 2; } What are the pros and cons of these solutions?
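
    For contrast, here is a rough C# sketch of the two options. All names (EmailExists, DoInsert, AddError, UserStorageException) are hypothetical, carried over from the pseudocode above, not an established API:

        // Option 1: the mapper validates and records an error state on the domain object.
        public void InsertWithErrorState(User user)
        {
            if (EmailExists(user.Email))            // the uniqueness check hits the storage layer
                user.AddError(User.InvalidEmail);   // the caller inspects the bitmask afterwards
            else
                DoInsert(user);
        }

        // Option 2: the mapper throws a persistence-specific exception instead.
        public void InsertOrThrow(User user)
        {
            if (EmailExists(user.Email))
                throw new UserStorageException(UserStorageException.InvalidEmail);
            DoInsert(user);
        }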

    Read the article

  • Implementing an ILogger interface to log data

    - by Jon
    I have a need to write data to file in one of my classes. Obviously I will pass an interface into my class to decouple it. I was thinking this interface will be used for testing and also in other projects. This is my interface: //This could be used by filesystem, webservice public interface ILogger { List<string> PreviousLogRecords {get;set;} void Log(string Data); } public interface IFileLogger : ILogger { string FilePath {get;set;} bool ValidFileName {get;} } public class MyClassUnderTest { public MyClassUnderTest(IFileLogger logger) {....} } [Test] public void TestLogger() { var mock = new Mock<IFileLogger>(); mock.Setup(x => x.Log(It.IsAny<string>()).AddsDataToList()); //Is this possible?? var myClass = new MyClassUnderTest(mock.Object); myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3"); Assert.AreEqual(3, mock.Object.PreviousLogRecords.Count); } I don't believe this will work as nothing is storing the items, so is this possible using Moq, and also what do you think of the design of the interface?
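
    For what it's worth, Moq can capture the logged strings with a Callback; here is a minimal sketch, assuming the ILogger/IFileLogger interfaces above:

        var captured = new List<string>();
        var mock = new Mock<IFileLogger>();
        mock.Setup(x => x.Log(It.IsAny<string>()))
            .Callback<string>(s => captured.Add(s));          // store each logged string
        mock.SetupGet(x => x.PreviousLogRecords).Returns(captured);

        var myClass = new MyClassUnderTest(mock.Object);
        myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");
        Assert.AreEqual(3, mock.Object.PreviousLogRecords.Count);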

    Read the article

  • Question about a simple design problem

    - by Uri
    At work I stumbled upon a method. It made a query and returned a String based on the result of the query, such as the ID of a customer. If the query didn't return a single customer, it'd return null. Otherwise, it'd return a String with their IDs. It looked like this: String error = getOwners(); if (error != null) { throw new Exception("Can't delete, the flat is owned by: " + error); } ... Ignoring the fact that getOwners() returns null when it should instead return an empty String, two things are happening here. It checks if the flat is owned by someone, and then returns the owners. I think a more readable logic would be to do this: if (isOwned) { throw new Exception("Can't delete, the flat is owned by: " + getOwners()); } ... The problem is that the first way does with one query what I do with two queries to the database. What would be a good solution involving good design and efficiency for this?
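
    One way to keep a single query while making the intent explicit is to return the owner list itself and branch on emptiness; a rough C# sketch (method and type names assumed, not from the original code):

        IList<string> owners = GetOwners();   // one query; returns an empty list when unowned
        if (owners.Any())
        {
            throw new Exception("Can't delete, the flat is owned by: " + string.Join(", ", owners));
        }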

    Read the article

  • Understanding Application binary interface (ABI)

    - by Tim
    I am trying to understand the concept of an Application Binary Interface (ABI). From The Linux Kernel Primer: An ABI is a set of conventions that allows a linker to combine separately compiled modules into one unit without recompilation, such as calling conventions, machine interface, and operating-system interface. Among other things, an ABI defines the binary interface between these units. ... The benefits of conforming to an ABI are that it allows linking object files compiled by different compilers. From Wikipedia: an application binary interface (ABI) describes the low-level interface between an application (or any type of) program and the operating system or another application. ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on. I was wondering whether an ABI depends on both the instruction set and the OS. Are those two all that an ABI depends on? What kind of role does the ABI play in the different stages of compilation: preprocessing, conversion of code from C to Assembly, conversion of code from Assembly to machine code, and linking? From the first quote above, it seems to me that the ABI is needed only for the linking stage, not the other stages. Is that correct? When does the ABI need to be considered? Does the ABI need to be considered when programming in C, Assembly or other languages? If yes, how are ABI and API different? Or is it only for the linker or compiler? Is an ABI specified for/in machine code, Assembly language, and/or C?
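
    To make this concrete, here is a small C# interop sketch (the native library name "mylib" is made up) in which several ABI details must match what the native compiler produced, even though nothing is recompiled:

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]   // field order, sizes and alignment are ABI details
        struct NativePoint { public int X; public int Y; }

        static class NativeMethods
        {
            // The calling convention is another ABI detail: it must match how "mylib" was compiled.
            [DllImport("mylib", CallingConvention = CallingConvention.Cdecl)]
            public static extern int Translate(ref NativePoint p, int dx, int dy);
        }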

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail. Simpler Code My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time). Strongly Typed Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface. // With the SDK public class MyData1 : TableServiceEntity { public string Message { get; set; } public string Level { get; set; } public string Severity { get; set; } } // With the Enzo Azure API public class MyData2 : BaseAzureTable { public string Message { get; set; } public string Level { get; set; } public string Severity { get; set; } } Simpler Code Now that the classes representing an Azure Table entity are defined, let's review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table): // With the Azure SDK public List<MyData1> FetchAllEntities() { CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString); CloudTableClient tableClient = storageAccount.CreateCloudTableClient(); TableServiceContext serviceContext = tableClient.GetDataServiceContext(); CloudTableQuery<MyData1> partitionQuery = (from e in serviceContext.CreateQuery<MyData1>(_tableName) select new MyData1() { PartitionKey = e.PartitionKey, RowKey = e.RowKey, Timestamp = e.Timestamp, Message = e.Message, Level = e.Level, Severity = e.Severity }).AsTableServiceQuery<MyData1>(); return partitionQuery.ToList(); } This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually. So for larger entities, with dozens of properties, your code will grow.
And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this: // With the Enzo Azure API public List<MyData2> FetchAllEntities() { AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName); List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity"); return res; } As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter). Fetch Strategies Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously): public List<MyData2> FetchAllEntitiesGUID() { AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName); List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity"); return res; } Faster Results With Sequential Fetch Methods Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers it faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).
With Fetch Strategies When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth. Additional Methods The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities: - Support for batch updates, deletes and inserts - Conversion of entities to DataRow, and List<> to a DataTable - Extension methods for Delete, Merge, Update, Insert - Support for asynchronous calls and cancellation - Support for fetch statistics (total bytes, total REST calls, retries…) For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx). About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specializing in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Visually and audibly unambiguous subset of the Latin alphabet?

    - by elliot42
    Imagine you give someone a card with the code "5SBDO0" on it. In some fonts, the letter "S" is difficult to visually distinguish from the number five (as with the number zero and the letter "O"). Reading the code out loud, it might be difficult to distinguish "B" from "D", necessitating saying "B as in boy," "D as in dog," or using a "phonetic alphabet" instead. What's the biggest subset of letters and numbers that will, in most cases, both look unambiguous visually and sound unambiguous when read aloud? Background: We want to generate a short string that can encode as many values as possible while still being easy to communicate. Imagine you have a 6-character string, "123456". In base 10 this can encode 10^6 values. In hex "1B23DF" you can encode 16^6 values in the same number of characters, but this can sound ambiguous when read aloud ("B" vs. "D"). Likewise for any string of N characters, you get (size of alphabet)^N values. The string is limited to a length of about six characters, due to wanting to fit easily within the capacity of human working memory. Thus to find the max number of values we can encode, we need to find the largest unambiguous set of letters/numbers. There's no reason we can't consider the letters G-Z, and some common punctuation, but I don't want to have to do the pairwise comparisons manually myself: "does G sound like A?", "does G sound like B?", "does G sound like C?". As we know this would be O(n^2) linguistic work to do =)...
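
    As a starting point, here is a brute-force C# sketch: begin with the full alphanumeric set and drop one member of each known confusion pair. The pair list below is a tiny illustrative sample, not a vetted visual/phonetic dataset:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class UnambiguousAlphabet
        {
            static void Main()
            {
                var candidates = new HashSet<char>("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789");
                // Sample pairs only; a real list needs per-font and per-accent research.
                var confusablePairs = new[] { "0O", "5S", "1I", "BD", "MN", "2Z" };
                foreach (var pair in confusablePairs)
                    candidates.Remove(pair[0]);   // drop one member; the survivor stays unambiguous
                Console.WriteLine(new string(candidates.OrderBy(c => c).ToArray()));
            }
        }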

    Read the article

  • Google Analytics: How long does it take users to trigger an event

    - by Stephen Ostermiller
    I implemented Google Analytics event tracking on my currency conversion website. The typical user flow is: User lands on a page about two currencies. User enters an amount to be converted. The site shows the user the value in the other currency. The JavaScript sends Google Analytics a "converted" event when the currency conversion is done. Because most of the sessions on my site are single-page, the event tracking is very important for me to be able to know whether users find my page useful. I'm looking for a way to figure out how long it typically takes users to enter a value in the form. I expect that this data would form a bell curve centered around a specific amount of time after page load. If I can't get a graph, I could make do with a median value. I would like to be able to use this as a core metric around usability testing. Is there a way to get this information out of Google Analytics?

    Read the article

  • Unnamed Refactoring

    - by Liam McLennan
    This post is a message in a bottle. I cast it into the sea in the hope that it will one day return to me, stuffed to the cork with enlightenment. Yesterday I tweeted, "what is the name of the pattern where you replace a multi-way conditional with an associative array?" I said 'pattern' but I meant 'refactoring'. Anyway, no one replied so I will describe the refactoring here. Programmers tend to think imperatively, which leads to code such as: public int GetPopulation(string country) { if (country == "Australia") { return 22360793; } else if (country == "China") { return 1324655000; } else if (country == "Switzerland") { return 7782900; } else { throw new Exception("What ain't no country I ever heard of. They speak English in what?"); } } which is horrid. We can write a cleaner version, replacing the multi-way conditional with an associative array, treating the conditional as data: public int GetPopulation(string country) { if (!Populations.ContainsKey(country)) throw new Exception("The population of " + country + " could not be found."); return Populations[country]; } private Dictionary<string, int> Populations { get { return new Dictionary<string, int> { {"Australia", 22360793}, {"China", 1324655000}, {"Switzerland", 7782900} }; } } Does this refactoring already have a name? Otherwise, I propose "Replace multi-way conditional with associative array".
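
    Whatever the name, one small refinement is worth noting: building the dictionary once (static readonly) and using TryGetValue avoids both the repeated allocation on every call and the double lookup:

        private static readonly Dictionary<string, int> Populations = new Dictionary<string, int>
        {
            {"Australia", 22360793},
            {"China", 1324655000},
            {"Switzerland", 7782900}
        };

        public int GetPopulation(string country)
        {
            int population;
            if (!Populations.TryGetValue(country, out population))
                throw new Exception("The population of " + country + " could not be found.");
            return population;
        }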

    Read the article

  • Why do I have an error when adding states in Slick?

    - by SystemNetworks
    When I was going to create another state I got an error. This is my code: public static final int play2 = 3; and public Game(String gamename){ this.addState(new mission(play2)); } and public void initStatesList(GameContainer gc) throws SlickException{ this.getState(play2).init(gc, this); } The error is in the addState call in the code above. I don't know where the problem is, but if you want the whole code, it is here: package javagame; import org.newdawn.slick.*; import org.newdawn.slick.state.*; public class Game extends StateBasedGame{ public static final String gamename = "NET FRONT"; public static final int menu = 0; public static final int play = 1; public static final int train = 2; public static final int play2 = 3; public Game(String gamename){ super(gamename); this.addState(new Menu(menu)); this.addState(new Play(play)); this.addState(new train(train)); this.addState(new mission(play2)); } public void initStatesList(GameContainer gc) throws SlickException{ this.getState(menu).init(gc, this); this.getState(play).init(gc, this); this.getState(train).init(gc, this); this.enterState(menu); this.getState(play2).init(gc, this); } public static void main(String[] args) { try{ AppGameContainer app =new AppGameContainer(new Game(gamename)); app.setDisplayMode(1500, 1000, false); app.start(); }catch(SlickException e){ e.printStackTrace(); } } } //SYSTEM NETWORKS(C) 2012 NET FRONT

    Read the article

  • Preseeded installation keeps asking for confirmation while creating RAID partitions on a certain hardware platform

    - by Marc Shennon
    I am aware of the partman-md/confirm_nooverwrite thing, which was the solution to most of these problems in the past. The thing is that the preseed file works for almost all hardware platforms I tested; only on one (Primergy MX130) does it keep asking for confirmation before writing the partition layout to the disks. All machines I tested are running with two SATA disks, nothing special. I'm not really sure what information is needed in order to investigate the cause of this behaviour, but I would of course be willing to provide more information if someone has an idea. The relevant part of the preseed file is the following: d-i partman-auto/disk string /dev/sda /dev/sdb d-i partman-auto/method string raid d-i partman-md/confirm boolean true d-i partman-partitioning/confirm_write_new_label boolean true d-i partman-md/device_remove_md boolean true d-i partman/choose_partition select finish d-i partman-md/confirm_nooverwrite boolean true # Write the changes to disks? d-i partman/confirm boolean true d-i mdadm/boot_degraded boolean true # RECIPE # Next you need to specify the physical partitions that will be used. d-i partman-auto/expert_recipe string \ multiraid :: \ 500 10000 1000000000 raid $lvmignore{ }\ $primary{ } \ method{ raid } \ . \ 512 1000 786 raid $lvmignore{ }\ $primary{ } \ method{ raid } \ . \ 8192 10240 10240 raid $lvmignore{ }\ method{ raid } \ . # Parameters are: # <raidtype> <devcount> <sparecount> <fstype> <mountpoint> <devices> <sparedevices> d-i partman-auto-raid/recipe string \ 1 2 0 ext4 / /dev/sda1#/dev/sdb1 . \ 1 2 0 ext2 /boot /dev/sda2#/dev/sdb2 . \ 1 2 0 swap - /dev/sda5#/dev/sdb5 .

    Read the article

  • Gathering IP address and workstation information; does it belong in a state class?

    - by p.campbell
    I'm writing an enterprisey utility that collects exception information and writes it to the Windows Event Log, sends an email, etc. This utility class will be used by all applications in the corporation: web, BizTalk, Windows Services, etc. Currently this class: holds state given to it via public properties; calls out to .NET Framework methods to gather information about runtime details. Included are calls to various properties and methods from System.Environment, Reflection details, etc. This implementation has the benefit that all those callers do not have to make these same calls themselves. This means less code for the caller to forget, screw up, etc. Should this state class (please, what's the phrase I'm looking for [like DTO]?) know how to resolve/determine runtime details (like the IP address and machine name that it's running on)? It seems to me on second thought that it's meant to be a class that should hold state, and not know how to call out to the .NET Framework to find information. var myEx = new AppProblem{MachineName="Riker"}; //Will get "Riker 10.0.0.1" from property MachineLongDesc Console.WriteLine("full machine details: " + myEx.MachineLongDesc); public class AppProblem { public string MachineName{get;set;} public string MachineLongDesc{ get{ if(string.IsNullOrEmpty(this.MachineName)) { this.MachineName = Environment.MachineName; } return this.MachineName + " " + GetCurrentIP(); } } private string GetCurrentIP() { return System.Net.Dns.GetHostEntry(this.MachineName) .AddressList.First().ToString(); } } This code was written by hand from memory and is presented for simplicity, trying to illustrate the concept.

    Read the article

  • GAPI output doesn't match Google Analytics website

    - by Yekver
    I have to get the main info about my Google Analytics Goals. I'm using the GAPI lib, with this code: <?php require_once 'conf.inc'; require_once 'gapi.class.php'; $ga = new gapi(ga_email,ga_password); $dimensions = array('pagePath', 'hostname'); $metrics = array('goalCompletionsAll', 'goalConversionRateAll', 'goalValueAll'); $ga->requestReportData(ga_profile_id, $dimensions, $metrics, '-goalCompletionsAll', '', '2012-09-07', '2012-10-07', 1, 500); $gaResults = $ga->getResults(); foreach($gaResults as $result) { var_dump($result); } and this is the code's output: object(gapiReportEntry)[7] private 'metrics' => array (size=3) 'goalCompletionsAll' => int 12031 'goalConversionRateAll' => float 206.93154454764 'goalValueAll' => float 0 private 'dimensions' => array (size=2) 'pagePath' => string '/catalogs.php' (length=13) 'hostname' => string 'www.example.com' (length=13) object(gapiReportEntry)[6] private 'metrics' => array (size=3) 'goalCompletionsAll' => int 9744 'goalConversionRateAll' => float 661.05834464043 'goalValueAll' => float 0 private 'dimensions' => array (size=2) 'pagePath' => string '/price.php' (length=10) 'hostname' => string 'www.example.com' (length=13) What I see on the Google Analytics website on the Goal URLs page, for the same date range, is: Goal Completion Location Goal Completions Goal Value 1. /price.php 9,396 $0.00 2. /saloni.php 3,739 $0.00 As you can see, the outputs don't match. Why? What's wrong?

    Read the article

  • Persisting high score table in flash game without a network. (Featuring: HttpListenerException)

    - by bearcdp
    Hi everyone, this question is very programming-centric, but it's for a game so I figured I might as well post it here. I'm doing polishing work on a GGJ '11 game because it will be shown at an indie arcade tomorrow afternoon, and they're expecting our final build in the morning. We'd like to have a high score table that displays during attract mode, but since it's Flash (Flixel) it would require some networking, Mochi, or something to keep a record of these scores. The only problem is that the machine we'd be running on probably won't have network access. As a quick solution, I thought I'd just write up a dinky little high score server in C#/.NET that could take basic GET requests for submitting scores and getting the score list. We're talking REAL basic, like blocking while waiting for an incoming request, a run-and-forget console app, etc. There's no guarantee that our .swf won't get reloaded, and we'd like the scores to persist, so this server would pretty much exist to keep a safe copy of the scores that the game can add to and request, and occasionally the server will write the scores to a flat text file. But HttpListener is giving me error code 87, 'The parameter is incorrect.' Have any idea what I'm doing wrong? Or better yet, am I barking up the wrong tree and missing an obviously simpler solution? This is all I've got so far in my Main(): HttpListener listener = new HttpListener(); listener.Prefixes.Add("http://localhost:66666/"); listener.Start(); The exception happens at listener.Start(); and the stack trace is: at System.Net.HttpListener.AddAllPrefixes() at System.Net.HttpListener.Start() at WOSEBCE_ScoreServer.Program.Main(String[] args) in C:\Users\Michael\Documents\Visual Studio 2010\VS2010 Projects\WOSEBCE_ScoreServer\WOSEBCE_ScoreServer\Program.cs:line 24 at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()
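
    One thing stands out: 66666 is outside the valid TCP port range (the maximum is 65535), which by itself can make HttpListener.Start() fail with "The parameter is incorrect." A minimal sketch with a valid port (the port choice and response body are placeholders):

        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");     // port must be <= 65535
        listener.Start();
        while (true)
        {
            HttpListenerContext ctx = listener.GetContext(); // blocks until a request arrives
            using (var writer = new StreamWriter(ctx.Response.OutputStream))
                writer.Write("OK");                          // score handling would go here
            ctx.Response.Close();
        }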

    Read the article

  • Toolset agnostic build server and Silverlight projects

    - by Marko Apfel
    Problem Normally I try to keep my continuous integration as toolset-free as possible, to ensure that no local stuff can have an impact on my build. My Silverlight app references a special compile target in a folder outside my developer tree: <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> So I copied the stuff from this folder to a local one and changed the call to this target in my csproj: <Import Project="..\..\..\tools\WebApplications\Microsoft.WebApplication.targets" /> And now the Visual Studio Conversion Wizard welcomes me with this: Solution Regardless of what I write, this conversion comes back again and again if the line has any form other than <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> So it seems that there is no simple way to change this behaviour. Workaround I must accept that this line must be in the csproj, and to run the build the toolset must be copied to the build server at the correct location. So go to your development machine where Visual Studio is installed and copy the folder "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications" to your build server at the equivalent location.   Xmas wishes to Microsoft: please provide technologies that let us developers bundle all the stuff a project needs in one developer tree. One checkout should be enough to start us up! No additional installations, regardless of whether it is a developing machine or a dedicated build or continuous integration server. Silverlight is only one example; code analysis configurations can be just as bad, and there is much more …

    Read the article

  • HTML Tidy in NetBeans IDE

    - by Geertjan
    First step in integrating HTML Tidy (via its JTidy implementation) into NetBeans IDE: The reason why I started doing this is because I want to integrate this into the pluggable analyzer functionality of NetBeans IDE that I recently blogged about, i.e., where the FindBugs functionality is found. So a logical first step is to get it working in an Action class, after which I can port it into the analyzer infrastructure: import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.io.IOException; import java.io.PrintWriter; import java.io.StringWriter; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionReferences; import org.openide.awt.ActionRegistration; import org.openide.cookies.EditorCookie; import org.openide.cookies.LineCookie; import org.openide.loaders.DataObject; import org.openide.text.Line; import org.openide.text.Line.ShowOpenType; import org.openide.util.Exceptions; import org.openide.util.NbBundle.Messages; import org.openide.windows.IOProvider; import org.openide.windows.InputOutput; import org.openide.windows.OutputEvent; import org.openide.windows.OutputListener; import org.openide.windows.OutputWriter; import org.w3c.tidy.Tidy; @ActionID(     category = "Tools", id = "org.jtidy.TidyAction") @ActionRegistration(     displayName = "#CTL_TidyAction") @ActionReferences({     @ActionReference(path = "Loaders/text/html/Actions", position = 150),     @ActionReference(path = "Editors/text/html/Popup", position = 750) }) @Messages("CTL_TidyAction=Run HTML Tidy") public final class TidyAction implements ActionListener {     private final DataObject context;     private final OutputWriter writer;     private EditorCookie ec = null;     public TidyAction(DataObject context) {         this.context = context;         ec = context.getLookup().lookup(org.openide.cookies.EditorCookie.class);         InputOutput io = IOProvider.getDefault().getIO("HTML Tidy", false);         io.select();         writer = io.getOut();     }     @Override     public void actionPerformed(ActionEvent ev) {         Tidy tidy = new Tidy();         try {             writer.reset();             StringWriter stringWriter = new StringWriter();             PrintWriter errorWriter = new PrintWriter(stringWriter);             tidy.setErrout(errorWriter);             tidy.parse(context.getPrimaryFile().getInputStream(), System.out);             String[] split = stringWriter.toString().split("\n");             for (final String string : split) {                 final int end = string.indexOf(" c");                 if (string.startsWith("line")) {                     writer.println(string, new OutputListener() {                         @Override                         public void outputLineAction(OutputEvent oe) {                             LineCookie lc = context.getLookup().lookup(LineCookie.class);                             int lineNumber = Integer.parseInt(string.substring(0, end).replace("line ", ""));                             Line line = lc.getLineSet().getOriginal(lineNumber - 1);                             line.show(ShowOpenType.OPEN, Line.ShowVisibilityType.FOCUS);                         }                         @Override                         public void outputLineSelected(OutputEvent oe) {}                         @Override                         public void outputLineCleared(OutputEvent oe) {}                     });                 }             }         } catch (IOException ex) {             
Exceptions.printStackTrace(ex);         }     } } The string parsing above is ugly but gets the job done for now. A problem integrating this into the pluggable analyzer functionality is the limitation of its scope. The analyzer lets you select one or more projects, or individual files, but not a folder. So it doesn't work on folders in the Favorites window, for example, which is where I'd like to apply HTML Tidy, across multiple folders via the analyzer functionality. That's a bit of a bummer that I'm hoping to get around somehow.

    Read the article

  • Are Vala and desktopcouch ready?

    - by pavolzetor
    Hi, I have started writing an RSS reader in Vala, but I don't know what database system I should use. I cannot connect to CouchDB; SQLite works fine, but I would like to use CouchDB because of Ubuntu One. I have Natty with the latest updates. public CouchDB.Session session; public CouchDB.Database db; public string feed_table = "feed"; public string item_table = "item"; public struct field { string name; string val; } // constructor public Database() { try { this.session = new CouchDB.Session(); } catch (Error e) { stderr.printf ("%s a\n", e.message); } try { this.db = new CouchDB.Database (this.session, "test"); } catch (Error e) { stderr.printf ("%s a\n", e.message); } try { this.session.get_database_info("test"); } catch (Error e) { stderr.printf ("%s aa\n", e.message); } try { var newdoc = new CouchDB.Document (); newdoc.set_boolean_field ("awesome", true); newdoc.set_string_field ("phone", "555-VALA"); newdoc.set_double_field ("pi", 3.14159); newdoc.set_int_field ("meaning_of_life", 42); this.db.put_document (newdoc); // store document } catch (Error e) { stderr.printf ("%s aaa\n", e.message); } This reports: $ ./xml_parser rss.xml Cannot connect to destination (127.0.0.1) aa Cannot connect to destination (127.0.0.1) aaa

    Read the article

  • Interfaces on an abstract class

    - by insta
    My coworker and I have different opinions on the relationship between base classes and interfaces. I'm of the belief that a class should not implement an interface unless that class can be used when an implementation of the interface is required. In other words, I like to see code like this: interface IFooWorker { void Work(); } abstract class BaseWorker { ... base class behaviors ... public abstract void Work(); protected string CleanData(string data) { ... } } class DbWorker : BaseWorker, IFooWorker { public override void Work() { Repository.AddCleanData(base.CleanData(UI.GetDirtyData())); } } The DbWorker is what gets the IFooWorker interface, because it is an instantiable implementation of the interface. It completely fulfills the contract. My coworker prefers the nearly identical: interface IFooWorker { void Work(); } abstract class BaseWorker : IFooWorker { ... base class behaviors ... public abstract void Work(); protected string CleanData(string data) { ... } } class DbWorker : BaseWorker { public override void Work() { Repository.AddCleanData(base.CleanData(UI.GetDirtyData())); } } Where the base class gets the interface, and by virtue of this all inheritors of the base class are of that interface as well. This bugs me but I can't come up with concrete reasons why, outside of "the base class cannot stand on its own as an implementation of the interface". What are the pros & cons of his method vs. mine, and why should one be used over the other?

    Read the article

  • How can I tell if I am overusing multi-threading?

    - by exhuma
    NOTE: This is a complete re-write of the question. The text before was way too lengthy and did not get to the point! If you're interested in the original question, you can look it up in the edit history. I currently feel like I am over-using multi-threading. I have 3 types of data, A, B and C. Each A can be converted to multiple Bs and each B can be converted to multiple Cs. I am only interested in treating Cs. I could write this fairly easily with a couple of conversion functions. But I caught myself implementing it with threads, three queues (queue_a, queue_b and queue_c). There are two threads doing the different conversions, and one worker: ConverterA reads from queue_a and writes to queue_b ConverterB reads from queue_b and writes to queue_c Worker handles each element from queue_c The conversions are fairly mundane, and I don't know if this model is too convoluted. But it seems extremely robust to me. Each "converter" can start working even before data has arrived on the queues, and at any time in the code I can just "submit" new As or Bs and it will trigger the conversion pipeline which in turn will trigger a job by the worker thread. Even the resulting code looks simpler. But I still am unsure if I am abusing threads for something simple.
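
    For comparison, the same pipeline is fairly compact in C# with BlockingCollection; a sketch under the assumption that ConvertAtoB/ConvertBtoC are the "mundane" conversion functions and A, B, C are the three data types:

        var queueA = new BlockingCollection<A>();
        var queueB = new BlockingCollection<B>();
        var queueC = new BlockingCollection<C>();

        var converterA = Task.Run(() => {
            foreach (var a in queueA.GetConsumingEnumerable())
                foreach (var b in ConvertAtoB(a)) queueB.Add(b);
            queueB.CompleteAdding();              // signal downstream that no more Bs will come
        });
        var converterB = Task.Run(() => {
            foreach (var b in queueB.GetConsumingEnumerable())
                foreach (var c in ConvertBtoC(b)) queueC.Add(c);
            queueC.CompleteAdding();
        });
        var worker = Task.Run(() => {
            foreach (var c in queueC.GetConsumingEnumerable())
                Handle(c);                        // treat each C as it arrives
        });
        // Producers call queueA.Add(...) and finally queueA.CompleteAdding().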

    Read the article

  • Best Practice: What can the hashCode() method implementation be if the custom fields used in equals() are null?

    - by goodspeed
    What is the best practice for the return value of the hashCode() method if the custom fields used in equals() are null? I have a situation where the equals() override is implemented using custom fields. Usually it is better to also override hashCode() using the custom fields used in equals(). But if all the custom fields used in equals() are null, then what would be the best implementation for hashCode()? Example: class Person { private String firstName; private String lastName; public String getFirstName() { return firstName; } public String getLastName() { return lastName; } @Override public boolean equals(Object object) { boolean result = false; if (object == null || object.getClass() != getClass()) { result = false; } else { Person person = (Person) object; if (this.firstName == person.getFirstName() && this.lastName == person.getLastName()) { result = true; } } return result; } @Override public int hashCode() { int hash = 3; if(this.firstName == null || this.lastName == null) { // What is the best practice here, // is return super.hashCode() better? } hash = 7 * hash + this.firstName.hashCode(); hash = 7 * hash + this.lastName.hashCode(); return hash; } } Is it required to check for null in hashCode()? If yes, what should be returned if the custom values are null?

    Read the article

  • Convert collections of enums to collection of strings and vice versa

    - by Michael Freidgeim
    Recently I needed to convert collections of strings that represent enum names to collections of enums, and the opposite: to convert collections of enums to collections of strings. I didn't find standard LINQ extensions. However, in our big collection of helper extensions I found what I needed, just with different names: /// <summary> /// Safe conversion, ignore any unexpected strings. /// Consider naming it as a Convert extension. /// </summary> /// <typeparam name="EnumType"></typeparam> /// <param name="stringsList"></param> /// <returns></returns> public static List<EnumType> StringsListAsEnumList<EnumType>(this List<string> stringsList) where EnumType : struct, IComparable, IConvertible, IFormattable { List<EnumType> enumsList = new List<EnumType>(); foreach (string sProvider in stringsList) { EnumType provider; if (EnumHelper.TryParse<EnumType>(sProvider, out provider)) { enumsList.Add(provider); } } return enumsList; } /// <summary> /// Convert each element of a collection to a string. /// </summary> /// <typeparam name="T"></typeparam> /// <param name="objects"></param> /// <returns></returns> public static IEnumerable<string> ToStrings<T>(this IEnumerable<T> objects) { //from http://www.c-sharpcorner.com/Blogs/997/using-linq-to-convert-an-array-from-one-type-to-another.aspx return objects.Select(en => en.ToString()); }
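
    Since .NET 4, Enum.TryParse makes the string-to-enum helper a few lines; a minimal sketch (unknown names are silently skipped, matching the "safe conversion" behaviour above):

        public static List<TEnum> ToEnumList<TEnum>(this IEnumerable<string> names)
            where TEnum : struct
        {
            var result = new List<TEnum>();
            foreach (var name in names)
            {
                TEnum value;
                if (Enum.TryParse(name, out value))
                    result.Add(value);
            }
            return result;
        }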

    Read the article

  • The better way to ask for input?

    - by Skippy
    I am wondering which is the best way to go with Java code. I need to create a class with simple prompts for input. I have tried using both classes and cannot work out the particular benefits of each. Is this because I am still in the early stages of programming, or are there situations that will occur as things become more complex? import java.util.Scanner; public class myClass { Scanner stdin = new Scanner(System.in); public String getInput(String prompt) { System.out.print(prompt); return stdin.nextLine(); } ... or import java.io.*; public class myClass { public static void main(String[] args) throws IOException { BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in)); System.out.print("Input something: "); String name = stdin.readLine(); I know these examples show different methods within these classes, but I thought this might serve well for the discussion. I'm really not sure which site is the best to ask this on.

    Read the article

  • Need some critique on .NET/WCF SOA architecture plan

    - by user998101
    I am working on a refactoring of some services and would appreciate some critique of my general approach. I am working with three back-end data systems and need to expose an authenticated front-end API over http binding, JSON, and REST for internal apps as well as 3rd-party integration. I've got a rough idea below that's a hybrid of what I have and where I intend to wind up. I intend to build guidance extensions to support this architecture so that devs can build this out quickly. Here's the current idea for our structure: Front-end WCF routing service (spread across multiple IIS servers via hardware load balancer) Load balancing of services behind routing is handled within the routing service, probably round-robin One of the services will be a token service Multiple bindings per service exposed to address JSON, REST, and whatever else comes up later All in/out is handled via POCO DTOs Use Unity to scan for what services are available and expose them The front-end services behind the routing service do nothing more than expose the API and do DTO<->Entity conversion Unity injects the service implementation to allow mocking AutoMapper for DTO/Entity conversion Invoke WF services where a response is required immediately Queue to ESB for async WF (the ESB will invoke WF later) Business logic WF layer Expose same API as front-end services Implement business logic Wrap transaction context where needed Call out to composite/atomic services Composite/Atomic Services Exposed as WCF One service per back-end system Standard atomic CRUD operations plus composite operations Supports transaction context The questions I have are: Is the separation of concerns outlined above beneficial? Current thought is each layer below is its own project, except the backend stuff, where each system gets one project. The project has a servicehost and all the services are under a services folder. Interfaces live in a separate project at each layer. DTOs and Entities are in two separate projects under a shared folder. I am currently planning to build dedicated services for shared functionality such as logging, and overload things like TraceListener to call those services. Is this a valid approach? Any other suggestions/comments?
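
    A minimal sketch of one layer boundary under this plan, i.e. a front-end contract plus a POCO DTO with the REST/JSON binding in mind (all names are made up for illustration):

        [DataContract]
        public class CustomerDto
        {
            [DataMember] public string Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            [WebGet(UriTemplate = "customers/{id}", ResponseFormat = WebMessageFormat.Json)]
            CustomerDto GetCustomer(string id);
        }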

    Read the article

  • Do there exist programming languages where a variable can truly know its own name?

    - by Job
    In PHP and Python one can iterate over the local variables and, if there is only one choice where the value matches, you could say that you know what the variable's name is, but this does not always work. Machine code does not have variable names. C compiles to assembly and does not have any native reflection capabilities, so it would not know its name. (Edit: per Anton's answer the pre-processor can know the variable's name.) Do there exist programming languages where a variable would know its name? It gets tricky if you do something like b = a and b does not become a copy of a but a reference to the same place. EDIT: Why in the world would you want this? I can think of one example: error checking that can survive automatic refactoring. Consider this C# snippet: private void CheckEnumStr(string paramName, string paramValue) { if (paramValue != "pony" && paramValue != "horse") { string exceptionMessage = String.Format( "Unexpected value '{0}' of the parameter named '{1}'.", paramValue, paramName); throw new ArgumentException(exceptionMessage); } } ... CheckEnumStr("a", a); // Var 'a' does not know its name - this will not survive naive auto-refactoring There are other libraries provided by Microsoft and others that allow checking for errors (sorry, the names have escaped me). I have seen one library which, with the help of closures/lambdas, can accomplish error checking that survives refactoring, but it does not feel idiomatic. This would be one reason why I might want a language where a variable knows its name.
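
    For the C# example specifically, later versions of the language mostly solve this: nameof is checked by the compiler and survives rename refactoring, and C# 10 adds CallerArgumentExpression so the callee can capture the argument's source text itself. A sketch:

        // With the original two-argument method, nameof keeps the name string in sync:
        CheckEnumStr(nameof(a), a);

        // C# 10+ alternative: the compiler supplies the argument's source text itself.
        static void CheckEnumStr2(
            string paramValue,
            [System.Runtime.CompilerServices.CallerArgumentExpression("paramValue")] string paramName = "")
        {
            if (paramValue != "pony" && paramValue != "horse")
                throw new ArgumentException(
                    "Unexpected value '" + paramValue + "' of the parameter named '" + paramName + "'.");
        }
        // CheckEnumStr2(a) now reports the name "a" and survives rename refactoring.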

    Read the article

  • Why is the remove function not working for hashmaps? [migrated]

    - by John Marston
    I am working on a simple project that obtains data from an input file, gets what it needs, and prints it to a file. I am basically computing word frequency, so each key is a string and the value is its frequency in the document. The problem, however, is that I need to print these values to the file in descending order of frequency. After making my hashmap, this is the part of my program that sorts it and writes it to a file. //Hashmap I create Map<String, Integer> map = new ConcurrentHashMap<String, Integer>(); //function to sort hashmap while (map.isEmpty() == false){ for (Entry<String, Integer> entry: map.entrySet()){ if (entry.getValue() > valueMax){ max = entry.getKey(); System.out.println("max: " + max); valueMax = entry.getValue(); System.out.println("value: " + valueMax); } } map.remove(max); out.write(max + "\t" + valueMax + "\n"); System.out.println(max + "\t" + valueMax); } When I run this I get: t 9 t 9 t 9 t 9 t 9 .... so it appears the remove function is not working, as it keeps getting the same value. I'm thinking I have an issue with a scope rule or I just don't understand hashmaps very well. If anyone knows of a better way to sort a hashmap and print it, I would welcome a suggestion. Thanks

    Read the article

  • [LINQ] Master-Detail Same Record (I)

    - by JTorrecilla
    PROBLEM Firstly, I am working on a project based on LINQ, EF, and C# with VS2010. The following tables show what I have and what I want to show.
    Header (C1 | C2 | C3):
      1 | P1 | 01/01/2011
      2 | P2 | 01/02/2011
    Details:
      1 | 1 | D1
      2 | 1 | D2
      3 | 1 | D3
      4 | 2 | D1
      5 | 2 | D4
    Expected results:
      1 | P1 | 01/01/2011 | D1, D2, D3
      2 | P2 | 01/02/2011 | D1, D4
    IDEAS At the beginning I saw three possible ways: - Doing it inside the DB: it could be achieved with a CURSOR in a stored procedure. - Doing it from .NET with loops. - Doing it with LINQ (I love it!!) FIRST APPROX An example with a simple class containing a list. With an Employee class that acts as the Header table: public class Employee { public Employee() { } public Int32 ID { get; set; } public String FirstName { get; set; } public String LastName { get; set; } public List<string> Numbers { get; set; } // Acts as the Details table } We can show all numbers contained by an Employee: List<Employee> listado = new List<Employee>(); //Fill listado var query = from Employee em in listado let Nums = string.Join(";", em.Numbers) select new { em.FirstName, Nums }; The LET operator allows us to hold the result of the Join over the Numbers list of the Employee class. A little example. ASAP I will post the second part, achieving the same with Entity Framework. Best Regards
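
    For the Entity Framework part to come, the same result can be sketched as a LINQ group join (property names are assumed from the tables above; with EF you would typically switch to LINQ to Objects via AsEnumerable before calling string.Join, since LINQ to Entities cannot translate it):

        var query =
            from h in headers
            join d in details on h.Id equals d.HeaderId into ds
            select new
            {
                h.Id,                                          // C1
                h.C2,
                h.C3,
                Details = string.Join(", ", ds.Select(x => x.Value))
            };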

    Read the article
