Search Results

Search found 8638 results on 346 pages for 'vs'.

  • Calling VS 2008 refactoring methods through command line?

    - by Huck
    Hello all, I want to create a simple batch file that would perform some Visual Studio 2008 refactoring tasks on some files. For example, I would like to call the Refactor.ExtractInterface command on a given file. Can I do this from the command line? Is there a better way (I am sure there is) of doing this? Thanks, H.

    Read the article

  • App.config vs. .ini files

    - by Jakob Gade
    I'm reviewing a .NET project, and I came across some pretty heavy usage of .ini files for configuration. I would much prefer to use app.config files instead, but before I jump in and make an issue out of this with the devs, I wonder if there are any valid reasons to favor .ini files over app.config?

    Read the article

  • DurandalJS vs AngularJS?

    - by Zach
    I'm looking for a JS framework to build an SPA and found DurandalJS and AngularJS. Can anyone compare these two frameworks? I did find many articles that compare AngularJS and KnockoutJS, and they say AngularJS is more than data binding, so I think DurandalJS may be the one to compare. I did a little research on AngularJS; it is good, but one thing is bad: the $ prefix does not work when minified, although there is an ugly workaround. And someone said Twitter Bootstrap does not work well with it (I didn't check). For DurandalJS, I still cannot find the samples (http://durandaljs.com/documentation/Understanding-the-Samples/), so it's hard to say. PS: do they work well with TypeScript? Best regards, Zach

    Read the article

  • How to see JavaScript object properties and functions with IntelliSense in VS.NET 2008

    - by uzay95
    I am creating classes in external files and adding them with <script src='../js/clsClassName.js' type='text/javascript'></script> tags. When I create an object from such a class, I can't access its properties and functions with IntelliSense. Is there any way to achieve this? Sample JavaScript class:

    // clsClassName.js
    function ClassName(_param1, _param2, _param3) {
        this.Prop1 = _param1;
        this.Prop2 = _param2;
        this.Prop3 = _param3;
    }
    ClassName.prototype.f_Add = function(fBefore, fSuccess, fComplete, fError) { }
    ClassName.prototype.f_Delete = function(fBefore, fSuccess, fComplete, fError) { }

    Any help would be greatly appreciated...

    Read the article

  • Storing date and time as epoch vs native datetime format in the database

    - by zakovyrya
    For most of my tasks I find it much easier to work with date and time in the epoch format: it's trivial to calculate a timespan or determine whether one event happened before or after another; I don't have to deal with time-zone issues if the data comes from different geographical sources; and in scripting languages, what I usually get from the database when I request a datetime-typed column is a string that I need to parse before I can work with it. This list could go on, but for me, keeping my code portable is reason enough to ditch the database's native datetime format and store date and time as an integer. What do you guys think?
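
    A small Java sketch of the tradeoff, assuming the column stores epoch seconds in UTC (the values and zone below are made up): ordering and timespans are plain integer arithmetic, but anything calendar-aware still needs an explicit time zone, which is the part a native datetime column would otherwise carry for you.

    import java.time.Duration;
    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class EpochDemo {
        public static void main(String[] args) {
            // Two events stored as epoch seconds (UTC) in an integer column.
            long createdAt = 1_700_000_000L;
            long closedAt  = 1_700_086_400L;

            // Ordering and timespan arithmetic are plain integer math.
            boolean closedAfterCreated = closedAt > createdAt;
            Duration open = Duration.ofSeconds(closedAt - createdAt);

            // Anything calendar-aware (display, "same local day" checks) still needs a zone.
            ZonedDateTime localView = Instant.ofEpochSecond(createdAt)
                                             .atZone(ZoneId.of("America/New_York"));

            System.out.println(closedAfterCreated + ", open for " + open.toHours() + "h, created " + localView);
        }
    }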

    Read the article

  • Serializing WPF RichTextBox to XAML vs RTF

    - by chaiguy
    I have a RichTextBox and need to serialize its content to my database purely for storage purposes. It would appear that I have a choice between serializing as XAML or as RTF, and I am wondering if there are any advantages to serializing to XAML over RTF, which I would consider more "standard". In particular, am I losing any capability by serializing to RTF instead of XAML? I understand XAML supports custom classes inside the FlowDocument, but I'm not currently using any custom classes (though the potential for extensibility might be enough reason to use XAML).

    Read the article

  • MySQL vs PHP when retrieving a random item

    - by andufo
    Hi, which is more efficient (when managing over 100K records)?

    A. MySQL: SELECT * FROM user ORDER BY RAND(); of course, after that I would already have all the fields from that record.

    B. PHP: use memcached to have $cache_array hold all the data from SELECT id_user FROM user ORDER BY id_user for an hour or so, and then $id = array_rand($cache_array); of course, after that I have to make a MySQL call with: SELECT * FROM user WHERE id_user = $id;

    So... which is more efficient, A or B?
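
    Worth noting for scale: ORDER BY RAND() makes MySQL assign a random value to every row and sort the whole table, and caching every id just to array_rand() one of them keeps a copy of the key set in memory. A third option is to pick a random id in application code and fetch one row by primary key. A sketch in Java/JDBC for illustration (connection URL and credentials are placeholders; it assumes id_user values are reasonably dense, and rows just after a gap are slightly more likely to be picked):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.concurrent.ThreadLocalRandom;

    public class RandomUser {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/app", "appuser", "secret")) {

                // One cheap indexed query for the id range instead of caching all ids.
                long maxId;
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT MAX(id_user) FROM user")) {
                    rs.next();
                    maxId = rs.getLong(1);
                }

                // Jump to a random point in the id range; ">=" skips over deleted gaps.
                long pick = ThreadLocalRandom.current().nextLong(1, maxId + 1);
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT * FROM user WHERE id_user >= ? ORDER BY id_user LIMIT 1")) {
                    ps.setLong(1, pick);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            System.out.println("picked id_user = " + rs.getLong("id_user"));
                        }
                    }
                }
            }
        }
    }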

    Read the article

  • Maven "Module" vs "Project"

    - by Ricket
    I'm a beginner at Maven and I've played with it from a command line point of view a little, so now I was trying to use it in Eclipse; I installed the m2eclipse plugin to do so. But I'm stumped from the very beginning! Apparently I've missed a bit of terminology somewhere along the line. I can't keep track of all these new Maven terms... What is a Maven Project, and what is a Maven Module? These are my options when creating a new project in the Maven category in Eclipse.

    Read the article

  • What is your ratio of bug fixing vs enhancements?

    - by Newtopian
    In the spirit of this question I wanted to get a sense of the proportion of time split between fixing bugs and implementing new features. If possible, try to give an estimate for the product as a whole rather than individual developer stats, and average it over the course of a typical year. Do provide a general description of the product/project to allow comparison. Specifically: the maturity of the project; whether it is still actively developed or strictly in maintenance; a size estimate of the product/project; the size of the team developing it (all inclusive); and your team's score on the Joel test. Example: approx 80% of time spent on bug fixes, 20% on new stuff; mature software (20 years old); actively developed; 1.5M lines of text, approx 700k-900k LOC; 12-15 actively coding in it; we got 5/12 for sure, some would say 7/12.

    Read the article

  • SQL GUID vs Integer

    - by Dal
    Hi, I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and that was a lot easier to work with, in my opinion. For example, say you had two related tables, Product and ProductType: I could easily cross-check the ProductTypeID column of both tables for a particular row and quickly map the data in my head, because it's easy to hold a number (2, 4, 45 etc.) in your head as opposed to E75B92A3-3299-4407-A913-C5CA196B3CAB. The extra frustration comes from wanting to understand how the tables are related; sadly there is no database diagram :( A lot of people say that GUIDs are better because you can define the unique identifier in your C# code, for example using NewID(), without requiring SQL Server to do it, which also lets you know provisionally what the ID will be... but I've seen that it is possible to retrieve the 'next auto-incremented integer' too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs... Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice made by some professional, there must be some good reasons why it's used.

    Read the article

  • Access to variables in an asp.net user control vs an include file

    - by user204588
    I've asked this question before but couldn't get the answer I was looking for, so I'm going to try again. I'm translating pages from classic ASP to ASP.NET, and I don't want to do this any other way, so I really just want to know if this can be done. In ASP, I'd assign a variable on one page with <% myVar = "something" %>. I could assign many variables there and then use an include, <!--#include file="Test2.aspx"-->; then in the Test2 file I could access all the variables without having to pass them into the control or declare them again, e.g. <% myVar = "something else" %>. I want to do this the .NET way, but I have some thirty variables on the page, and I don't want to pass a bunch into the user control or keep declaring the same variables. All I really want to know is whether there is some way to replicate the behavior above in ASP.NET.

    Read the article

  • Symfony/Doctrine: Unserialize in action vs template

    - by Tom
    Hi, can anyone tell me why calling "unserialize" works fine in an action but gives an offset error in a template? It's possible to unserialize a database text result into a variable in an action and pass it to the template, in which case it displays fine: $this->clean = unserialize($this->raw); <?php echo $clean ?> But not when it's called directly in a template: <?php echo unserialize($raw) ?> I'd be interested to know why this is so and whether there's some workaround. Thanks.

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to find out whether ZTable uses fetch techniques or not. In the future we may migrate our smaller system to PostgreSQL; it currently uses "Table" components (like the BDE, but against an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast: Locate/Lookup runs on the server and only those N records are refreshed, no matter how many records are in the lookup table. PostgreSQL uses fetching as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records (I also tried this with MySQL). The adapter is Zeos. The columns below are the ID searched for, the seconds needed to find it, and the memory allocated on the client in bytes:

    ID searched   seconds   client memory (bytes)
    MySQL:
    500000        2.761     113 196 344
    1000000       3.214     225 471 232
    313800        0.437     225 471 232
    328066        0.468     225 471 232
    276374        0.390     225 471 232
    905984        1.264     225 471 232
    260253        0.359     225 471 232
    PGSQL:
    500000        3.042     113 188 184
    1000000       3.744     225 463 064
    313800        0.436     225 463 064
    328066        0.452     225 463 064
    276374        0.375     225 463 064
    905984        1.295     225 463 064
    260253        0.359     225 463 064
    142023        0.203     225 463 064

    As you can see, the records are fetched locally, which causes the 225 MB usage, and searches are a little slow depending on where the record we must find is located. I want to ask a few more things: a.) Does PGDAC have some technique that lets us use lookups without paying for the fetch in memory and seconds? b.) Or can the PostgreSQL ODBC driver help with this problem via ADO (as far as I know, ADO can use server-side cursors)? c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not (including client memory usage)? d.) If there is no way to avoid the fetch hell with lookups, what can we do: server-side joins and custom code for lookup-field changes instead of a real Lookup? Thanks for your help: dd

    Read the article

  • Sqs vs SqsGen2 using RightScale right_aws GEM

    - by Fitter Man
    I'm trying to use the right_aws (1.10.0) gem with Rails, and I've reduced my problem to a 3-line irb session. The following works:

    require 'rubygems'
    require 'right_aws'
    sqs = RightAws::Sqs.new("xxxxxxxxxxxxxxx", "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

    while this fails:

    require 'rubygems'
    require 'right_aws'
    sqs = RightAws::SqsGen2.new("xxxxxxxxxxxxxxx", "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

    with NameError: uninitialized constant RightAws::SqsGen2. I see the class definition in the gem source, and the documentation is old but seems accurate, yet I can't figure out what I'm doing wrong. And while you're at it: if I'm building something new, is there any reason I'd want to use the older interface?

    Read the article

  • Concepts: Channel vs. Stream

    - by hotzen
    Hello, is there a conceptual difference between the terms "Channel" and "Stream"? Do the terms require or determine, for example, the allowed number of concurrent consumers or producers? I'm currently developing a channel/stream of DataFlowVariables which may be written by one producer and read by one consumer, as the implementation is destructive/mutable. Would this be a channel or a stream, or is there any difference at all? Thanks
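
    One way to make the distinction concrete, purely as an illustrative convention rather than a formal definition: call it a channel when reading is destructive and point-to-point (each value is consumed by exactly one reader), and a stream when reading is non-destructive and can be observed by several readers. A minimal single-producer/single-consumer channel built on a bounded BlockingQueue in Java:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ChannelDemo {
        public static void main(String[] args) throws InterruptedException {
            // A bounded, destructive conduit: take() removes the value,
            // so each element is seen by exactly one consumer.
            BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(16);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        channel.put(i);      // blocks while the channel is full
                    }
                    channel.put(-1);         // poison pill: signals "no more values"
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    int v;
                    while ((v = channel.take()) != -1) {   // blocks while the channel is empty
                        System.out.println("got " + v);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }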

    Read the article

  • Getting a Method's Return Value in the VS Debugger

    - by Bullines
    Is it possible to get a method's return value in the Visual Studio debugger, even if that value isn't assigned to a local variable? For example, I'm debugging the following code:

    public string Foo(int valueIn)
    {
        if (valueIn > 100)
            return Proxy.Bar(valueIn);
        else
            return "Not enough";
    }

    Since I'm not setting any local variables in Foo, and assuming I'm not setting a breakpoint in whatever's calling Foo, is there a way to see what the return value is if I have a breakpoint inside of Foo (or another way)? I don't have much experience with the Autos or Immediate windows, so I'm not sure whether those are even a valid option.
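
    A language-agnostic workaround, if the debugger won't show the value directly, is to capture the result in a temporary local before returning, so it shows up in Locals/Autos at a breakpoint on the return line. A tiny sketch (written in Java here; the shape is identical in C#, and the bar helper merely stands in for Proxy.Bar from the question):

    public class ReturnValueDemo {
        // Hypothetical stand-in for Proxy.Bar in the question.
        static String bar(int v) {
            return "value:" + v;
        }

        public String foo(int valueIn) {
            // Capture the result in a local so a breakpoint on the return line
            // can inspect it before the method exits.
            String result = (valueIn > 100) ? bar(valueIn) : "Not enough";
            return result;
        }

        public static void main(String[] args) {
            System.out.println(new ReturnValueDemo().foo(150));
        }
    }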

    Read the article

  • Traditional IO vs memory-mapped

    - by Senne
    I'm trying to illustrate the difference in performance between traditional IO and memory-mapped files in Java to students. I found an example somewhere on the internet, but not everything is clear to me; I don't even think all the steps are necessary. I read a lot about it here and there, but I'm not convinced that either of them is implemented correctly. The code I'm trying to understand is:

    import java.io.*;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.util.Date;

    public class FileCopy {
        public static void main(String args[]) {
            if (args.length < 1) {
                System.out.println(" Wrong usage!");
                System.out.println(" Correct usage is : java FileCopy <large file with full path>");
                System.exit(0);
            }
            String inFileName = args[0];
            File inFile = new File(inFileName);
            if (inFile.exists() != true) {
                System.out.println(inFileName + " does not exist!");
                System.exit(0);
            }
            try {
                new FileCopy().memoryMappedCopy(inFileName, inFileName + ".new");
                new FileCopy().customBufferedCopy(inFileName, inFileName + ".new1");
            } catch (FileNotFoundException fne) {
                fne.printStackTrace();
            } catch (IOException ioe) {
                ioe.printStackTrace();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public void memoryMappedCopy(String fromFile, String toFile) throws Exception {
            long timeIn = new Date().getTime();
            // read input file
            RandomAccessFile rafIn = new RandomAccessFile(fromFile, "rw");
            FileChannel fcIn = rafIn.getChannel();
            ByteBuffer byteBuffIn = fcIn.map(FileChannel.MapMode.READ_WRITE, 0, (int) fcIn.size());
            fcIn.read(byteBuffIn);
            byteBuffIn.flip();
            RandomAccessFile rafOut = new RandomAccessFile(toFile, "rw");
            FileChannel fcOut = rafOut.getChannel();
            ByteBuffer writeMap = fcOut.map(FileChannel.MapMode.READ_WRITE, 0, (int) fcIn.size());
            writeMap.put(byteBuffIn);
            long timeOut = new Date().getTime();
            System.out.println("Memory mapped copy Time for a file of size :" + (int) fcIn.size() + " is " + (timeOut - timeIn));
            fcOut.close();
            fcIn.close();
        }

        static final int CHUNK_SIZE = 100000;
        static final char[] inChars = new char[CHUNK_SIZE];

        public static void customBufferedCopy(String fromFile, String toFile) throws IOException {
            long timeIn = new Date().getTime();
            Reader in = new FileReader(fromFile);
            Writer out = new FileWriter(toFile);
            while (true) {
                synchronized (inChars) {
                    int amountRead = in.read(inChars);
                    if (amountRead == -1) {
                        break;
                    }
                    out.write(inChars, 0, amountRead);
                }
            }
            long timeOut = new Date().getTime();
            System.out.println("Custom buffered copy Time for a file of size :" + (int) new File(fromFile).length() + " is " + (timeOut - timeIn));
            in.close();
            out.close();
        }
    }

    When exactly is it necessary to use RandomAccessFile? Here it is used to read and write in memoryMappedCopy; is it actually necessary just to copy a file at all, or is it part of memory mapping? And in customBufferedCopy, why is synchronized used?

    I also found a different example that -should- test the performance between the two:

    import java.io.*;
    import java.nio.IntBuffer;
    import java.nio.channels.FileChannel;

    public class MappedIO {
        private static int numOfInts = 4000000;
        private static int numOfUbuffInts = 200000;

        private abstract static class Tester {
            private String name;

            public Tester(String name) {
                this.name = name;
            }

            public long runTest() {
                System.out.print(name + ": ");
                try {
                    long startTime = System.currentTimeMillis();
                    test();
                    long endTime = System.currentTimeMillis();
                    return (endTime - startTime);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }

            public abstract void test() throws IOException;
        }

        private static Tester[] tests = {
            new Tester("Stream Write") {
                public void test() throws IOException {
                    DataOutputStream dos = new DataOutputStream(
                        new BufferedOutputStream(
                            new FileOutputStream(new File("temp.tmp"))));
                    for (int i = 0; i < numOfInts; i++)
                        dos.writeInt(i);
                    dos.close();
                }
            },
            new Tester("Mapped Write") {
                public void test() throws IOException {
                    FileChannel fc = new RandomAccessFile("temp.tmp", "rw").getChannel();
                    IntBuffer ib = fc.map(FileChannel.MapMode.READ_WRITE, 0, fc.size()).asIntBuffer();
                    for (int i = 0; i < numOfInts; i++)
                        ib.put(i);
                    fc.close();
                }
            },
            new Tester("Stream Read") {
                public void test() throws IOException {
                    DataInputStream dis = new DataInputStream(
                        new BufferedInputStream(
                            new FileInputStream("temp.tmp")));
                    for (int i = 0; i < numOfInts; i++)
                        dis.readInt();
                    dis.close();
                }
            },
            new Tester("Mapped Read") {
                public void test() throws IOException {
                    FileChannel fc = new FileInputStream(new File("temp.tmp")).getChannel();
                    IntBuffer ib = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size()).asIntBuffer();
                    while (ib.hasRemaining())
                        ib.get();
                    fc.close();
                }
            },
            new Tester("Stream Read/Write") {
                public void test() throws IOException {
                    RandomAccessFile raf = new RandomAccessFile(new File("temp.tmp"), "rw");
                    raf.writeInt(1);
                    for (int i = 0; i < numOfUbuffInts; i++) {
                        raf.seek(raf.length() - 4);
                        raf.writeInt(raf.readInt());
                    }
                    raf.close();
                }
            },
            new Tester("Mapped Read/Write") {
                public void test() throws IOException {
                    FileChannel fc = new RandomAccessFile(new File("temp.tmp"), "rw").getChannel();
                    IntBuffer ib = fc.map(FileChannel.MapMode.READ_WRITE, 0, fc.size()).asIntBuffer();
                    ib.put(0);
                    for (int i = 1; i < numOfUbuffInts; i++)
                        ib.put(ib.get(i - 1));
                    fc.close();
                }
            }
        };

        public static void main(String[] args) {
            for (int i = 0; i < tests.length; i++)
                System.out.println(tests[i].runTest());
        }
    }

    I more or less see what's going on; my output looks like this:

    Stream Write: 653
    Mapped Write: 51
    Stream Read: 651
    Mapped Read: 40
    Stream Read/Write: 14481
    Mapped Read/Write: 6

    What is making the Stream Read/Write take so unbelievably long? And as a read/write test, it looks a bit pointless to me to read the same integer over and over (if I understand correctly what's going on in Stream Read/Write); wouldn't it be better to read ints from the previously written file and just read and write ints at the same place? Is there a better way to illustrate it? I've been breaking my head over a lot of these things for a while and I just can't get the whole picture.
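
    On the RandomAccessFile question: a mapping only needs a FileChannel, and for a read-only mapping a plain read channel is enough; opening the file with RandomAccessFile in "rw" mode is just one way to obtain a writable channel. The synchronized block in customBufferedCopy guards a buffer that only one thread ever touches in this program, so it adds nothing here. Below is a more minimal comparison, offered as a sketch rather than as the canonical benchmark (class and file names are invented, Java 7+ NIO assumed): it copies the same file once with buffered byte streams and once through a read-only mapping of the source.

    import java.io.*;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class CopyComparison {

        // "Traditional" copy: byte streams and an explicit buffer. Bytes, not chars,
        // so the file is copied verbatim regardless of encoding.
        static void streamCopy(Path src, Path dst) throws IOException {
            try (InputStream in = new BufferedInputStream(new FileInputStream(src.toFile()));
                 OutputStream out = new BufferedOutputStream(new FileOutputStream(dst.toFile()))) {
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }

        // Memory-mapped copy: a read-only mapping of the source is written straight
        // to the destination channel. No RandomAccessFile is needed for this.
        // (A single map() call is limited to 2 GB; larger files need chunked mappings.)
        static void mappedCopy(Path src, Path dst) throws IOException {
            try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
                 FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                         StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
                MappedByteBuffer map = in.map(FileChannel.MapMode.READ_ONLY, 0, in.size());
                while (map.hasRemaining()) {
                    out.write(map);
                }
            }
        }

        public static void main(String[] args) throws IOException {
            Path src = Paths.get(args[0]);
            long t0 = System.nanoTime();
            streamCopy(src, Paths.get(args[0] + ".stream"));
            long t1 = System.nanoTime();
            mappedCopy(src, Paths.get(args[0] + ".mapped"));
            long t2 = System.nanoTime();
            System.out.printf("stream copy: %d ms, mapped copy: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
        }
    }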

    Read the article

  • ejb timer service vs cron

    - by darko petreski
    Hi. The EJB timer service can start a process at desired time intervals. We can do the same thing with cron (with a minimum interval of 1 minute), but with cron we have more control over monitoring and changing the intervals; we can restart cron very easily from the command line if needed; and we can add or remove cron lines transparently. What are the advantages of using the EJB timer service over calling the EJBs from cron? (A few extra lines of code in the cron-invoked classes are not a problem.) Regards.
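
    For comparison, a minimal sketch of the container-managed side, assuming an EJB 3.1 container (the class name and schedule are invented): the schedule is declared in code with cron-like calendar attributes, persistent timers survive container restarts, and the bean runs with the usual transaction and security context, but changing the interval means a rebuild and redeploy, which is exactly the flexibility cron keeps.

    import javax.ejb.Schedule;
    import javax.ejb.Singleton;

    @Singleton
    public class NightlyCleanupTimer {

        // Fires every day at 02:15; persistent timers survive container restarts.
        @Schedule(hour = "2", minute = "15", persistent = true)
        public void runCleanup() {
            // Business logic goes here; the container supplies the scheduling thread.
            System.out.println("Nightly cleanup started");
        }
    }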

    Read the article

  • Throwing exception vs returning null value with switch statement

    - by Greg
    So I have a function that formats a date according to a given enum DateType {CURRENT, START, END}. What would be the best way of handling the return value in the cases of the switch statement?

    public static String format(Date date, DateType datetype) {
        // ..validation checks
        switch (datetype) {
            case CURRENT: {
                return getFormattedDate(date, "yyyy-MM-dd hh:mm:ss");
            }
            ...
            default: throw new IllegalArgumentException("Something strange happened");
        }
    }

    OR throw the exception at the end:

    public static String format(Date date, DateType datetype) {
        // ..validation checks
        switch (datetype) {
            case CURRENT: {
                return getFormattedDate(date, "yyyy-MM-dd hh:mm:ss");
            }
            ...
        }
        // It will never reach here, just to make the compiler happy
        throw new IllegalArgumentException("Something strange happened");
    }

    OR return null:

    public static String format(Date date, DateType datetype) {
        // ..validation checks
        switch (datetype) {
            case CURRENT: {
                return getFormattedDate(date, "yyyy-MM-dd hh:mm:ss");
            }
            ...
        }
        return null;
    }

    What would be best practice here? Note that all the enum values will be handled in the case statements.
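
    If the unreachable default is the annoyance, one option (not from the question, just a common refactoring) is to attach the pattern to the enum constant itself, so there is no switch left to fall through. A sketch, with the START and END patterns invented since the question only shows CURRENT's:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class DateFormatter {

        // Each constant carries its own pattern, so format() cannot miss a case.
        enum DateType {
            CURRENT("yyyy-MM-dd hh:mm:ss"),
            START("yyyy-MM-dd 00:00:00"),   // made-up placeholder pattern
            END("yyyy-MM-dd 23:59:59");     // made-up placeholder pattern

            private final String pattern;

            DateType(String pattern) {
                this.pattern = pattern;
            }

            String pattern() {
                return pattern;
            }
        }

        public static String format(Date date, DateType type) {
            // No default branch and no "can never happen" exception needed.
            return new SimpleDateFormat(type.pattern()).format(date);
        }

        public static void main(String[] args) {
            System.out.println(format(new Date(), DateType.CURRENT));
        }
    }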

    Read the article

  • int vs size_t on 64bit

    - by MK
    Porting code from 32bit to 64bit. Lots of places with int len = strlen(pstr); These all generate warnings now because strlen() returns size_t which is 64bit and int is still 32bit. So I've been replacing them with size_t len = strlen(pstr); But I just realized that this is not safe, as size_t is unsigned and it can be treated as signed by the code (I actually ran into one case where it caused a problem, thank you, unit tests!). Blindly casting strlen return to (int) feels dirty. Or maybe it shouldn't? So the question is: is there an elegant solution for this? I probably have a thousand lines of code like that in the codebase; I can't manually check each one of them and the test coverage is currently somewhere between 0.01 and 0.001%.

    Read the article
