Search Results

Search found 13748 results on 550 pages for 'split testing'.


  • Extract files from a RAR archive without having some previous parts

    - by Aria
    I have a big RAR archive which is split into 700 MB parts. I only have parts 5 and 6, and there is a 40 MB file in there that I want to extract using WinRAR. I know the whole file is stored in part 5, because when I open part 5 that file gets listed (along with many other files). But I can't extract any of them, because WinRAR asks for the previous archive parts, which I'm sure it really doesn't need. Is there a way to do that?

    Read the article

  • Revert Skype contact list to be like it was before the latest update

    - by mickburkejnr
    My Skype client has been updated to the latest version (on Windows 7). The new version groups all my Facebook contacts in along with my normal Skype contacts. I've tried to get used to it, but it's an absolute pain when you start a conversation with a contact, only to find you're speaking to them through their Facebook account and not their Skype account. Is there any way I can revert my Skype client to split the Skype contacts from the Facebook contacts, like it was before this update?

    Read the article

  • CRC error when extracting to SSD from 2nd HDD

    - by gbn
    Hello. I have a large RAR file (split into parts) containing an ISO, stored on my 2nd HDD. When I extract it to the same HDD, it's OK; when I extract it to the system/OS SSD, I get CRC errors. I've checked memory (run memtests), checked wires, etc. I have no other issues; it's only with this one RAR file. Any ideas, please?

    Read the article

  • Which Linux distro to use for Hyper-V hosting?

    - by TomTom
    Not being a Linux geek, I am looking for a recommendation on which Linux distro to use for a Hyper-V based hosting environment (so easy access to the enlightenment components is important). I would also love something that allows me to split the operating system's read-only files and the user files onto two discs without too much tinkering, so that the boot disc can be read-only. (Reasoning: this would allow me to set up a read-only disc that is shared between multiple server instances, with the server disc containing basically only the user files.)

    Read the article

  • What are optimal strategies for using mapreduce and other applications on the same server?

    - by user45532
    I have two applications that I need to run continuously to process data: 1) an app that processes and aggregates information from sources, and 2) a MapReduce workflow that processes the above info. I've thought about either getting VPS hosting or getting my own inexpensive server and using Xen to split the server's resources. Getting a quad-core box with 2 GB of RAM seems a lot cheaper than the grid options I've seen at Slicehost, Rackspace and others...

    Read the article

  • Excel, Pivot table, Relocate Filters on the worksheet

    - by Maria
    Hi, in the worksheet with my pivot table I have many different filters to choose between. It doesn't really look nice to the eye, and I want to be able to split that long list of filters into a few shorter ones, but I can't figure out how to do this. I've seen where I can move the whole pivot table, but then it's all included and moves as one unsplittable piece... does anyone know if this is possible?

    Read the article

  • Get username from Active Directory in C# .NET [closed]

    - by Jahangeer Ahmed
    I am trying to get the current username from Active Directory in C# .NET, but the last line below throws a null reference exception:

        ManagementObjectSearcher Usersearcher = new ManagementObjectSearcher("Select * From Win32_ComputerSystem");
        ManagementObjectCollection Usercollection = Usersearcher.Get();
        string[] sep = { "\\" };
        string[] UserNameDomain = Usercollection.Cast<ManagementObject>().First()["UserName"].ToString().Split(sep, StringSplitOptions.None);
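    A likely cause, for context (my assumption, not stated in the question): the UserName property of Win32_ComputerSystem is null when no user is logged on interactively, for example under a service account or in some remote sessions, so calling ToString() on it throws. A minimal sketch of a WMI-free alternative that reads the account the current process runs under:

    ```csharp
    using System;
    using System.Security.Principal;

    class CurrentUserDemo
    {
        static void Main()
        {
            // WindowsIdentity reflects the process token, so it is available
            // even when no user is logged on to the console.
            string qualified = WindowsIdentity.GetCurrent().Name; // e.g. "DOMAIN\\user"
            string[] parts = qualified.Split('\\');
            Console.WriteLine("Domain: {0}", parts[0]);
            Console.WriteLine("User:   {0}", parts.Length > 1 ? parts[1] : parts[0]);
        }
    }
    ```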

    Read the article

  • Something remapped <C-w> in Vim, how do I reclaim it?

    - by blackrobot
    I've been adding a lot of things to my Vim configuration, and apparently one of the plugins I've installed is reclaiming <C-w>. Whenever I use that key combination, it shows this error: "E784: Cannot close last tab page". Is there a way to reclaim <C-w>'s functionality without disabling the plugins? I mainly use it for switching between view panes in a split window.

    Read the article

  • Sharing a serial port between two processes

    - by peterrus
    As it is not possible to directly share a serial port between two processes on Linux, I am looking for another way to achieve this. I have heard about socat, but could not find a concrete example of how to realize the following: split one physical serial port (/dev/ttyUSB0) into two virtual ports, one for reading and one for writing, as one process only needs to send data and the other only needs to receive data. I cannot modify the sending application, unfortunately.

    Read the article

  • Solaris 10 very slow ssh file transfers

    - by user133080
    I am trying to copy a few TB between Solaris 10 u9 systems. A single scp only seems to be able to transfer around 120 MB/min over a 1 Gbit network. If I run multiple scp copies, each one will do 120 MB/min, so it is not the network as far as I can see. Any hints on how to tweak the Solaris settings to open a bigger pipe? I have the same problem with another piece of software that, unfortunately, does not seem to be able to be split into separate processes.

    Read the article

  • Best way to set up servers for .NET performance

    - by msigman
    Assume we have 3 physical servers, and let's say we are only interested in performance, not reliability. Is it better to give each server a specific function, or to make them all duplicates and split the traffic between them? In other words, dedicate one as the DB server, one as the web server, and one as the reporting server/data warehouse, or is it better to put all three services on each server and use them as a web farm?

    Read the article

  • Is it a good idea to change a recovery partition from primary to logical? [HP laptop]

    - by DiegoDD
    I have a new HP laptop, model dv6-6c85la, with a 1 TB hard drive, and it has 4 primary partitions, like this:

        |<- system [199 MB] -|<- c: [899.8 GB] -|<- d:(recovery) [27.5 GB] -|<- e:(hp_tools) [4 GB] -|

    I wanted to make another partition, splitting "C" (the main partition) into TWO partitions and leaving the rest as it is, but it doesn't let me, because there are already 4 primary partitions (the ones in the diagram). I read somewhere that I could in fact split C into 2 partitions, but only if the adjacent partition (in this case d:(recovery)) is converted into a "logical" partition. That way, the new unallocated part taken from C and the recovery partition would each be logical, "inside" an extended partition (right???). As I understand it, the resulting partitions would be: primary (system, no letter), primary (c:), extended [ logical (x:) | logical (d:recovery) ], primary (e: hp_tools), with "x" being the new one. Am I correct? My question is: if I convert the recovery partition to logical (so that it sits inside an extended partition adjacent to the new "x:" one), would I have any problems when, in case of a disaster, I want to restore the system using the now logical instead of primary RECOVERY partition? Or is it completely safe to change it to logical? My main concern is that it may need to be primary so that recovery can proceed at boot time; or am I completely wrong? How does the recovery process happen? I also understand that I can simply create recovery media on DVDs and then even delete the recovery partition completely, but as of now I don't want to do that. I may create the disks, but I don't want to delete the partition, simply because it would be a lot faster and easier to recover from a hard drive than from disks. Wrapping up: if I change a recovery partition from primary to logical, will the system still be capable of using it to recover, or does it NEED to be primary to work? The whole point is that I want to split C:, but as things are I can't directly; I'd need to change the recovery partition to logical. Or is there another way? Thanks.

    Read the article

  • (C++) Linking with namespaces causes duplicate symbol error

    - by user577072
    Hello. For the past few days, I have been trying to figure out how to link the files for a CLI gaming project I have been working on. There are two halves to the project: the Client and the Server code. The client needs two libraries I've made. The first is a general-purpose game board, split between GameEngine.h and GameEngine.cpp. The header file looks something like this:

        namespace gfdGaming {
            // struct sqr_size {
            //     Index x;
            //     Index y;
            // };
            typedef struct { Index x, y; } sqr_size;

            const sqr_size sPos = {1, 1};
            sqr_size sqr(Index x, Index y);
            sqr_size ePos;

            class board {
                // Prototypes / declarations for the class
            };
        }

    And the CPP file just gives everything content:

        #include "GameEngine.h"
        type gfdGaming::board::functions

    The client also has game-specific code (in this case, TicTacToe) split into declarations and definitions (TTT.h, Client.cpp). TTT.h is basically:

        #include "GameEngine.h"

        #define TTTtar "localhost"
        #define TTTport 2886

        using namespace gfdGaming;

        void* turnHandler(void*);

        namespace nsTicTacToe {
            GFDCON gfd;
            const char X = 'X';
            const char O = 'O';
            string MPhostname, mySID;
            board TTTboard;
            bool PlayerIsX = true, isMyTurn;
            char Player = X, Player2 = O;

            int recon(string* datHolder = NULL, bool force = false);
            void initMP(bool create = false, string hn = TTTtar);
            void init();
            bool isTie();
            int turnPlayer(Index loc, char lSym = Player);
            bool checkWin(char sym = Player);
            int mainloop();
            int mainloopMP();
        }; // NS

    I made the decision to put this in a namespace to group it, instead of in a class, because there are some parts that would not work well in OOP, and it's much easier to implement later on. I have had trouble linking the client in the past, but this setup seems to work. My server is also split into two files, Server.h and Server.cpp. Server.h contains exactly:

        #include "../TicTacToe/TTT.h" // Server needs a full copy of TicTacToe code

        class TTTserv;

        struct TTTachievement_requirement {
            Index id;
            Index loc;
            bool inUse;
        };

        struct TTTachievement_t {
            Index id;
            bool achieved;
            bool AND, inSameGame;
            bool inUse;
            bool (*lHandler)(TTTserv*);
            char mustBeSym;
            int mustBePlayer;
            string name, description;
            TTTachievement_requirement steps[safearray(8*8)];
        };

        class achievement_core_t : public GfdOogleTech {
        public: // May be shifted to private
            TTTachievement_t list[safearray(8*8)];
        public:
            achievement_core_t();
            int insert(string name, string d, bool samegame, bool lAnd, int lSteps[8*8], int mbP=0, char mbS=0);
        };

        struct TTTplayer_t {
            Index id;
            bool inUse;
            string ip, sessionID;
            char sym;
            int desc;
            TTTachievement_t Ding[8*8];
        };

        struct TTTgame_t {
            TTTplayer_t Player[safearray(2)];
            TTTplayer_t Spectator;
            achievement_core_t achievement_core;
            Index cTurn, players;
            port_t roomLoc;
            bool inGame, Xused, Oused, newEvent;
        };

        class TTTserv : public gSserver {
            TTTgame_t Game;
            TTTplayer_t *cPlayer;
            port_t conPort;
        public:
            achievement_core_t *achiev;
            thread threads[8];
            int parseit(string tDat, string tsIP);
            Index conCount;
            int parseit(string tDat, int tlUser, TTTplayer_t** retval);
        private:
            int parseProto(string dat, string sIP);
            int parseProto(string dat, int lUser);
            int cycleTurn();
            void setup(port_t lPort = 0, bool complete = false);
        public:
            int newEvent;
            TTTserv(port_t tlPort = TTTport, bool tcomplete = true);
            TTTplayer_t* userDC(Index id, Index force = false);
            int sendToPlayers(string dat, bool asMSG = false);
            int mainLoop(volatile bool *play);
        };

        // Other
        void* userHandler(void*);
        void* handleUser(void*);

    And in the CPP file I include Server.h and provide main() and the contents of all functions previously declared. Now to the problem at hand: I am having issues when linking my server. More specifically, I get a duplicate symbol error for every variable in nsTicTacToe (and possibly in gfdGaming as well). Since I need the TicTacToe functions, I link Client.cpp (without main()) when building the server:

        ld: duplicate symbol nsTicTacToe::PlayerIsX in Client.o and Server.o
        collect2: ld returned 1 exit status
        Command /Developer/usr/bin/g++-4.2 failed with exit code 1

    It stops once a problem is encountered, but if PlayerIsX is removed or changed temporarily, then another variable causes an error. Essentially, I am looking for any advice on how to better organize my code to hopefully fix these errors. Disclaimers: I apologize in advance if I provided too much or too little information, as it is my first time posting, and I have tried using static and extern to fix these problems, but apparently those are not what I need. Thank you to anyone who takes the time to read all of this and respond =)

    Read the article

  • Improving HTML scraper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help from two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $hrefArray = array("http://slashdot.org", "http://slashdot.org", "http://slashdot.org", "http://slashdot.org");

        function doDomStuff($singleHref, $childPid) {
            $html = new DOMDocument();
            $html->loadHtmlFile($singleHref);
            $xPath = new DOMXPath($html);
            $domQuery = '//div[@id="slogan"]/h2';
            $domReturn = $xPath->query($domQuery);
            foreach ($domReturn as $return) {
                $slogan = $return->nodeValue;
                echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
            }
        }

        $pids = array();
        foreach ($hrefArray as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref, $childPid);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    Which raises the following questions: 1) Given that my hrefArray contains 4 URLs: if the array were to contain, say, 1,000 product URLs, would this code spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and, again with 1,000 URLs as an example, split the child workload to 100 products per child (10 x 100)? 2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape and then feeds them off to child processes to do the processing, so spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $maxChildWorkers = 10;

        $html = new DOMDocument();
        $html->loadHtmlFile('http://xxxx');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@id=productDetail]/a';
        $domReturn = $xPath->query($domQuery);
        $hrefsArray[] = $domReturn->getAttribute('href');

        function doDomStuff($singleHref) {
            // Do stuff here with each product
        }

        // To figure out: split the href array into $maxChildWorkers batches:
        // workArray1, workArray2 ... workArray10.
        $pids = array();
        foreach ($workArray(1,2,3 ... 10) as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently everything I've tried causes loops in the child processes; i.e. my hrefsArray gets built in the master, and in each subsequent child process. I am sure I am going about this all totally wrong, so I would greatly appreciate a general nudge in the right direction.
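    For question 1, the batching arithmetic is language-independent, so here is a minimal sketch of it in C# (my own illustration, not the poster's PHP): split 1,000 URLs into at most 10 batches of 100, then hand one batch to each worker. In PHP itself, the equivalent pre-fork step would be array_chunk($hrefsArray, 100).

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class ChunkDemo
    {
        // Split the work list into at most maxWorkers batches; each batch
        // would then be handed to one forked/spawned worker.
        static List<string[]> Chunk(string[] hrefs, int maxWorkers)
        {
            int batchSize = (int)Math.Ceiling(hrefs.Length / (double)maxWorkers);
            var batches = new List<string[]>();
            for (int i = 0; i < hrefs.Length; i += batchSize)
                batches.Add(hrefs.Skip(i).Take(batchSize).ToArray());
            return batches;
        }

        static void Main()
        {
            string[] urls = Enumerable.Range(1, 1000)
                                      .Select(i => "http://example.com/product/" + i) // hypothetical URLs
                                      .ToArray();
            foreach (var batch in Chunk(urls, 10))
                Console.WriteLine("worker gets {0} urls", batch.Length); // prints 100, ten times
        }
    }
    ```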

    Read the article

  • JRuby wrong element type class java.lang.String(array contains char) related to JAVA_HOME

    - by Daryl
    I am on Ubuntu x64, running:

        java version "1.6.0_18"
        OpenJDK Runtime Environment (IcedTea6 1.8) (6b18-1.8-0ubuntu1)
        OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)

    and jruby 1.4.0 (ruby 1.8.7 patchlevel 174) (2010-02-11 6586) (OpenJDK 64-Bit Server VM 1.6.0_18) [amd64-java]. I have this code running on my Windows 7 computer at home. I recently copied my whole folder over to Ubuntu and installed Java, JRuby, and the associated gems, but I get this error when I run my main file:

        jruby run.rb test
        =================Processing FREDERICKSBURG_1.1=======================
        ERROR IN TESTING wrong element type class java.lang.String(array contains char)
        /home/daryl/Desktop/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `to_java'
        /home/daryl/Desktop/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `split'
        /home/daryl/Desktop/work/Code/geografikos/lib/models/page.rb:103:in `sentences'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/lingpipe_svm.rb:34:in `extract'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:9:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:111:in `generate_all'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:105:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:105:in `generate_all'
        run.rb:56

    The focus of the error is:

        ERROR IN TESTING wrong element type class java.lang.String(array contains char)

    Everything works fine on my Windows machine. I figured I was getting this error because I did not have JAVA_HOME set, so I added this to bashrc:

        export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk

    and have confirmed:

        echo $JAVA_HOME
        /usr/lib/jvm/java-1.6.0-openjdk

    I can produce a similar error by removing my JAVA_HOME variable on Windows:

        =================Processing FREDERICKSBURG_1.3=======================
        ERROR IN TESTING cannot convert instance of class org.jruby.RubyString to char
        C:/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `to_java'
        C:/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `split'
        C:/work/Code/geografikos/lib/models/page.rb:103:in `sentences'
        C:/work/Code/geografikos/lib/extractor/lingpipe_svm.rb:34:in `extract'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:9:in `process'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `each'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `process'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `each'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `process'
        C:/work/Code/geografikos/lib/statistics.rb:111:in `generate_all'
        C:/work/Code/geografikos/lib/statistics.rb:105:in `each'
        C:/work/Code/geografikos/lib/statistics.rb:105:in `generate_all'
        run.rb:56

    It is obviously not exactly the same, but I have a feeling this has to do with the Java path. You can probably derive from the error that I am just trying to convert a Ruby variable to Java using to_java. This works fine on my Windows machine, and I have confirmed the gems are the same, but I don't think this has to do with gems. I lied: I changed my JAVA_HOME back on my Windows machine and this error still occurs. So now the code doesn't run on either machine. I recently installed git on my Windows machine and added the code to a repository, but I haven't really done anything with it. All it said was that it would convert all LF to CRLF... that shouldn't change anything though, should it? Any ideas on why I am now getting these errors? I haven't changed anything on my Windows machine in months, except for installing git.

    Read the article

  • Design Question - how do you break the dependency between classes using an interface?

    - by Seth Spearman
    Hello. I apologize in advance, but this will be a long question. I'm stuck. I am trying to learn unit testing, C#, and design patterns, all at once. (Maybe that's my problem.) As such I am reading The Art of Unit Testing (Osherove), Clean Code (Martin), and Head First Design Patterns (O'Reilly). I am just now beginning to understand delegates and events (which you would see if you were to trawl my recent SO questions). I still don't quite get lambdas. To contextualize all of this, I have given myself a learning project I am calling goAlarms. I have an Alarm class with members you'd expect (NextAlarmTime, Name, AlarmGroup, Event Trigger, etc.). I wanted the "Timer" of the alarm to be extensible, so I created an IAlarmScheduler interface as follows:

        public interface IAlarmScheduler
        {
            Dictionary<string, Alarm> AlarmList { get; }
            void Startup();
            void Shutdown();
            void AddTrigger(string triggerName, string groupName, Alarm alarm);
            void RemoveTrigger(string triggerName);
            void PauseTrigger(string triggerName);
            void ResumeTrigger(string triggerName);
            void PauseTriggerGroup(string groupName);
            void ResumeTriggerGroup(string groupName);
            void SetSnoozeTrigger(string triggerName, int duration);
            void SetNextOccurrence(string triggerName, DateTime nextOccurrence);
        }

    This IAlarmScheduler interface defines a component that will RAISE an alarm (Trigger), which will bubble up to my Alarm class and raise the Trigger event of the alarm itself. It is essentially the "Timer" component. I have found that the Quartz.NET component is perfectly suited for this, so I have created a QuartzAlarmScheduler class which implements IAlarmScheduler. All that is fine. My problem is that the Alarm class is abstract, and I want to create a lot of different KINDS of alarm. For example, I already have a heartbeat alarm (triggered every (int) interval of minutes), an appointment alarm (triggered on a set date and time), a daily alarm (triggered every day at X), and perhaps others. And Quartz.NET is perfectly suited to handle this. My problem is a design problem. I want to be able to instantiate an alarm of any kind without my Alarm class (or any derived classes) knowing anything about Quartz. The problem is that Quartz has awesome factories that return just the right setup for the Triggers that will be needed by my Alarm classes. So, for example, I can get a Quartz trigger by using TriggerUtils.MakeMinutelyTrigger to create a trigger for the heartbeat alarm described above, or TriggerUtils.MakeDailyTrigger for the daily alarm. I guess I could sum it up this way: indirectly or directly, I want my alarm classes to be able to consume the TriggerUtils.Make* methods without knowing anything about them. I know that is a contradiction, but that is why I am asking the question. I thought about putting a delegate field into the alarm which would be assigned one of these Make methods, but by doing that I am creating a hard dependency between Alarm and Quartz, which I want to avoid for both unit testing purposes and design purposes. I thought of using a switch on the type in QuartzAlarmScheduler, per here, but I know it is bad design and I am trying to learn good design. If I may editorialize a bit: I've decided that coding (predefined) classes is easy. Design is HARD... in fact, really hard, and I am really fighting feeling stupid right now.
    I guess I want to know whether you really smart people took a while to really understand and master this stuff, or should I feel stupid (as I do) because I haven't grasped it better in the couple of weeks/months I have been studying? You guys are awesome, and thanks in advance for your answers. Seth
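    Since the question is abstract, here is a minimal sketch of the usual dependency-inversion answer (my own illustration with hypothetical names, not the poster's code): the alarm assembly owns a small factory interface, and only a separate adapter assembly references Quartz and its TriggerUtils.Make* methods. A unit test can pass in a fake factory that fires immediately, so Alarm stays testable without Quartz.

    ```csharp
    using System;

    // Lives in the alarm assembly; knows nothing about Quartz.
    public interface ITriggerFactory
    {
        void CreateTrigger(string triggerName, Action onFire);
    }

    public abstract class Alarm
    {
        private readonly ITriggerFactory factory;

        protected Alarm(ITriggerFactory factory)
        {
            this.factory = factory;
        }

        public event EventHandler Trigger;

        protected void Schedule(string name)
        {
            // The alarm states *that* it wants a trigger; the factory decides
            // *how* (Quartz, a plain timer, or a test fake).
            factory.CreateTrigger(name, () =>
            {
                EventHandler handler = Trigger;
                if (handler != null) handler(this, EventArgs.Empty);
            });
        }
    }

    public class HeartbeatAlarm : Alarm
    {
        public HeartbeatAlarm(ITriggerFactory factory) : base(factory)
        {
            Schedule("heartbeat");
        }
    }

    // In the Quartz-facing assembly only (sketch): a QuartzTriggerFactory
    // implementing ITriggerFactory, whose CreateTrigger wraps a call such as
    // TriggerUtils.MakeMinutelyTrigger and wires onFire to the Quartz job.
    ```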

    Read the article

  • Non-blocking client/server chat application in Java using NIO

    - by Amith
    I built a simple chat application using NIO channels. I am very much new to networking as well as threads. This application is for communicating with a server (a server/client chat application). My problem is that multiple clients are not supported by the server. How do I solve this problem? What's the bug in my code?

        public class Clientcore extends Thread {
            SelectionKey selkey = null;
            Selector sckt_manager = null;

            public void coreClient() {
                System.out.println("please enter the text");
                BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
                SocketChannel sc = null;
                try {
                    sc = SocketChannel.open();
                    sc.configureBlocking(false);
                    sc.connect(new InetSocketAddress(8888));
                    int i = 0;
                    while (!sc.finishConnect()) {
                    }
                    for (int ii = 0; ii > -22; ii++) {
                        System.out.println("Enter the text");
                        String HELLO_REQUEST = stdin.readLine().toString();
                        if (HELLO_REQUEST.equalsIgnoreCase("end")) {
                            break;
                        }
                        System.out.println("Sending a request to HelloServer");
                        ByteBuffer buffer = ByteBuffer.wrap(HELLO_REQUEST.getBytes());
                        sc.write(buffer);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    if (sc != null) {
                        try {
                            sc.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }

            public void run() {
                try {
                    coreClient();
                } catch (Exception ej) {
                    ej.printStackTrace();
                }
            }
        }

        public class ServerCore extends Thread {
            SelectionKey selkey = null;
            Selector sckt_manager = null;

            public void run() {
                try {
                    coreServer();
                } catch (Exception ej) {
                    ej.printStackTrace();
                }
            }

            private void coreServer() {
                try {
                    ServerSocketChannel ssc = ServerSocketChannel.open();
                    try {
                        ssc.socket().bind(new InetSocketAddress(8888));
                        while (true) {
                            sckt_manager = SelectorProvider.provider().openSelector();
                            ssc.configureBlocking(false);
                            SocketChannel sc = ssc.accept();
                            register_server(ssc, SelectionKey.OP_ACCEPT);
                            if (sc == null) {
                            } else {
                                System.out.println("Received an incoming connection from "
                                        + sc.socket().getRemoteSocketAddress());
                                printRequest(sc);
                                System.err.println("testing 1");
                                String HELLO_REPLY = "Sample Display";
                                ByteBuffer buffer = ByteBuffer.wrap(HELLO_REPLY.getBytes());
                                System.err.println("testing 2");
                                sc.write(buffer);
                                System.err.println("testing 3");
                                sc.close();
                            }
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    } finally {
                        if (ssc != null) {
                            try {
                                ssc.close();
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }
                } catch (Exception E) {
                    System.out.println("Ex in servCORE " + E);
                }
            }

            private static void printRequest(SocketChannel sc) throws IOException {
                ReadableByteChannel rbc = Channels.newChannel(sc.socket().getInputStream());
                WritableByteChannel wbc = Channels.newChannel(System.out);
                ByteBuffer b = ByteBuffer.allocate(1024); // read 1024 bytes
                while (rbc.read(b) != -1) {
                    b.flip();
                    while (b.hasRemaining()) {
                        wbc.write(b);
                        System.out.println();
                    }
                    b.clear();
                }
            }

            public void register_server(ServerSocketChannel ssc, int selectionkey_ops) throws Exception {
                ssc.register(sckt_manager, selectionkey_ops);
            }
        }

        public class HelloClient {
            public void coreClientChat() {
                Clientcore t = new Clientcore();
                new Thread(t).start();
            }

            public static void main(String[] args) throws Exception {
                HelloClient cl = new HelloClient();
                cl.coreClientChat();
            }
        }

        public class HelloServer {
            public void coreServerChat() {
                ServerCore t = new ServerCore();
                new Thread(t).start();
            }

            public static void main(String[] args) {
                HelloServer st = new HelloServer();
                st.coreServerChat();
            }
        }
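    The structural problem in the server above is that each pass through the accept loop opens a fresh selector, serves at most one connection, and then closes it. As a contrast, here is a minimal multi-client echo server sketched in C# (the language used elsewhere on this page, not a direct NIO translation), using the simplest fix: keep accepting in a loop and give every accepted connection its own handler thread.

    ```csharp
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class EchoServer
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Any, 8888);
            listener.Start();
            while (true)
            {
                // Block until a client connects, then serve it on its own
                // thread so the loop can immediately accept the next client.
                TcpClient client = listener.AcceptTcpClient();
                new Thread(() => Handle(client)).Start();
            }
        }

        static void Handle(TcpClient client)
        {
            using (client)
            using (NetworkStream stream = client.GetStream())
            {
                var buffer = new byte[1024];
                int read;
                // Echo until the client disconnects; the connection is closed
                // only when the client goes away, not after one message.
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    stream.Write(buffer, 0, read);
            }
        }
    }
    ```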

    Read the article

  • (iOS) UI Automation AlertPrompt button/textField accessibility

    - by lipd
    I'm having a bit of trouble with UI Automation (the tool built in to iOS) when it comes to alertView. First off, I'm not sure where I can set the accessibilityLabel and such for the buttons that are on the alertView. Secondly, although I am not getting an error, I can't actually get the textField's value to be set to something. I'll put up my code for the alertView and the JavaScript I am using for UI Automation.

        UIATarget.onAlert = function onAlert(alert) {
            // Log alerts and bail, unless it's the one we want
            var title = alert.name();
            UIALogger.logMessage("Alert with title '" + title + "' encountered!");
            alert.logElementTree();
            if (title == "AlertPrompt") {
                UIALogger.logMessage(alert.textFields().length + '');
                target.delay(1);
                alert.textFields()["AlertText"].setValue("AutoTest");
                target.delay(1);
                return true; // Override default handler
            }
            else
                return false;
        }

        var target = UIATarget.localTarget();
        var application = target.frontMostApp();
        var mainWindow = application.mainWindow();
        mainWindow.logElementTree();
        //target.delay(1);
        //mainWindow.logElementTree();
        //target.delay(1);
        var tableView = mainWindow.tableViews()[0];
        var button = tableView.buttons();
        //UIALogger.logMessage("Num buttons: " + button.length);
        //UIALogger.logMessage("num Table views: " + mainWindow.tableViews().length);
        //UIALogger.logMessage("Number of cells: " + tableView.cells().length);
        /*for (var currentCellIndex = 0; currentCellIndex < tableView.cells().length; currentCellIndex++) {
            var currentCell = tableView.cells()[currentCellIndex];
            UIALogger.logStart("Testing table option: " + currentCell.name());
            tableView.scrollToElementWithName(currentCell.name());
            target.delay(1);
            currentCell.tap(); // Go down a level
            target.delay(1);
            UIATarget.localTarget().captureScreenWithName(currentCell.name());
            //mainWindow.navigationBar().leftButton().tap(); // Go back
            target.delay(1);
            UIALogger.logPass("Testing table option " + currentCell.name());
        }*/
        UIALogger.logStart("Testing add item");
        target.delay(1);
        mainWindow.navigationBar().buttons()["addButton"].tap();
        target.delay(1);
        if (tableView.cells().length == 5)
            UIALogger.logPass("Successfully added item to table");
        else
            UIALogger.logFail("FAIL: didn't add item to table");

    Here's what I'm using for the alertView:

        #import "AlertPrompt.h"

        @implementation AlertPrompt

        @synthesize textField;
        @synthesize enteredText;

        - (id)initWithTitle:(NSString *)title message:(NSString *)message delegate:(id)delegate cancelButtonTitle:(NSString *)cancelButtonTitle okButtonTitle:(NSString *)okayButtonTitle withOrientation:(UIInterfaceOrientation)orientation
        {
            if ((self = [super initWithTitle:title message:message delegate:delegate cancelButtonTitle:cancelButtonTitle otherButtonTitles:okayButtonTitle, nil]))
            {
                self.isAccessibilityElement = YES;
                self.accessibilityLabel = @"AlertPrompt";
                UITextField *theTextField;
                if (orientation == UIInterfaceOrientationPortrait)
                    theTextField = [[UITextField alloc] initWithFrame:CGRectMake(12.0, 45.0, 260.0, 25.0)];
                else
                    theTextField = [[UITextField alloc] initWithFrame:CGRectMake(12.0, 30.0, 260.0, 25.0)];
                [theTextField setBackgroundColor:[UIColor whiteColor]];
                [self addSubview:theTextField];
                self.textField = theTextField;
                self.textField.isAccessibilityElement = YES;
                self.textField.accessibilityLabel = @"AlertText";
                [theTextField release];
                CGAffineTransform translate = CGAffineTransformMakeTranslation(0.0, 0.0);
                [self setTransform:translate];
            }
            return self;
        }

        - (void)show
        {
            [textField becomeFirstResponder];
            [super show];
        }

        - (NSString *)enteredText
        {
            return [self.textField text];
        }

        - (void)dealloc
        {
            //[textField release];
            [super dealloc];
        }

        @end

    Thanks for any help!

    Read the article

  • How do I return integers from a string?

    - by kannan.ambadi
    Suppose you are passing a string (for example: "My name has 1 K, 2 A and 3 N") which may contain integers, letters, or special characters, and you want to retrieve only the numbers from the input string. We can implement this in many ways, such as splitting the string into an array or using the TryParse method. I would like to share another idea: using regular expressions. All you have to do is create an instance of Regex with a pattern that matches the non-integer parts. The Regex class defines a method called Split, which splits the specified input string based on the pattern provided during object initialization. We can write the code as given below:

        public static int[] SplitIdSeqenceValues(object combinedArgs)
        {
            var _argsSeperator = new Regex(@"\D+", RegexOptions.Compiled);
            string[] splitedIntegers = _argsSeperator.Split(combinedArgs.ToString());

            var args = new int[splitedIntegers.Length];
            for (int i = 0; i < splitedIntegers.Length; i++)
                args[i] = MakeSafe.ToSafeInt32(splitedIntegers[i]);

            return args;
        }

    It is better to set RegexOptions.Compiled, which compiles the regular expression for a performance boost on repeated matching. Happy Programming :))
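    To make the idea concrete, here is a self-contained variant using the example string from the opening paragraph. MakeSafe.ToSafeInt32 is the author's helper and is not shown, so this sketch uses int.Parse instead and skips the empty entries that Regex.Split produces when the string starts or ends with non-digits:

    ```csharp
    using System;
    using System.Linq;
    using System.Text.RegularExpressions;

    class Demo
    {
        static readonly Regex NonDigits = new Regex(@"\D+", RegexOptions.Compiled);

        static int[] ExtractIntegers(string input)
        {
            // Split on runs of non-digit characters and keep only the
            // non-empty pieces, which are guaranteed to be digit runs.
            return NonDigits.Split(input)
                            .Where(s => s.Length > 0)
                            .Select(int.Parse)
                            .ToArray();
        }

        static void Main()
        {
            int[] numbers = ExtractIntegers("My name has 1 K, 2 A and 3 N");
            Console.WriteLine(string.Join(", ", numbers)); // 1, 2, 3
        }
    }
    ```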

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
    Running the application will give us the following result (shown with a tooltip example): [screenshot of the earthquake map with a tooltip open]. That concludes this portion of our show, but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!

    Additional Resources: USGS Earthquake Data Feeds; Brad Abrams shows how RIA Services and MVVM can work together

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in details the following points What is static code analysis? When to use? Supported platforms Supported Visual Studio versions How to use Run Code Analysis Manually Run Code Analysis Automatically Run Code Analysis while check-in source code to TFS version control (TFSVC) Run Code Analysis as part of Team Build Understand the Code Analysis results & learn how to fix them Create your custom rule set Q & A References What is static Rule analysis? Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and a lot of other categories of potential problems according to Microsoft's rules that mainly targets best practices in writing code, and there is a large set of those rules included with Visual Studio grouped into different categorized targeting specific coding issues like security, design, Interoperability, globalizations and others. Static here means analyzing the source code without executing it and this type of analysis can be performed through automated tools (like Visual Studio 2013 Code Analysis Tool) or manually through Code Review which already supported in Visual Studio 2012 and 2013 (check Using Code Review to Improve Quality video on Channel9) There is also Dynamic analysis which performed on executing programs using software testing techniques such as Code Coverage for example. When to use? Running Code analysis tool at regular intervals during your development process can enhance the quality of your software, examines your code for a set of common defects and violations is always a good programming practice. Adding that Code analysis can also find defects in your code that are difficult to discover through testing allowing you to achieve first level quality gate for you application during development phase before you release it to the testing team. Supported platforms .NET Framework, native (C and C++) Database applications. Support Visual Studio versions All version of Visual Studio starting Visual Studio 2013 (except Visual Studio Test Professional) check Feature comparisons Create and modify a custom rule set required Visual Studio Premium or Ultimate. How to use? Code Analysis can be run manually at any time from within the Visual Studio IDE, or even setup to automatically run as part of a Team Build or check-in policy for Team Foundation Server. Run Code Analysis Manually To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project or simply right click on the project name on the Solution Explorer choose Run Code Analysis from the context menu Run Code Analysis Automatically To run code analysis each time that you build a project, you select Enable Code Analysis on Build on the project's Property Page Run Code Analysis while check-in source code to TFS version control (TFSVC) Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through Check-in policies which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. 
    Run Code Analysis while checking in source code to TFS version control (TFVC)
    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies: rules that are set at the team project level and enforced on developer machines before code is allowed to be checked in. (This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of the Project Administrators or Project Collection Administrators group. For more information about Team Foundation permissions, check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx
    1. In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control.
    2. In the Source Control dialog box, select the Check-in Policy tab.
    3. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy.
    4. Check or uncheck the policy options based on the configuration you need, as illustrated below:
    - Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
    - Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
    - Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in. (Check the Code Analysis rule set reference on MSDN.)

    What is a rule set? A rule set is a group of code analysis rules, like the example below where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule; a minimal .ruleset sketch follows below. Once you have configured the analysis rules, the policy is enabled for every team member on the project: whenever a team member checks source code in to TFVC, the policy section will highlight the Code Analysis policy as below. TFS is a very extensible platform, so you can simply implement your own custom Code Analysis check-in policy; check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx. You also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx
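    The .ruleset format below is the one the rule set editor produces; the file name, rule selection, and severities here are made up for illustration (CA1000 is the check id behind "Do not declare static members on generic types").

        <?xml version="1.0" encoding="utf-8"?>
        <!-- TeamRules.ruleset: a hypothetical custom rule set (sketch only) -->
        <RuleSet Name="Team Rules" Description="Example rule set" ToolsVersion="12.0">
          <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis"
                 RuleNamespace="Microsoft.Rules.Managed">
            <!-- Do not declare static members on generic types -->
            <Rule Id="CA1000" Action="Error" />
            <!-- Use properties where appropriate -->
            <Rule Id="CA1024" Action="Warning" />
          </Rules>
        </RuleSet>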
    Run Code Analysis as part of Team Build
    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications and perform other important functions. Code Analysis can be enabled in the build definition by selecting the correct value for the build process parameter Perform Code Analysis. Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow, and you will be able to see code analysis warnings as part of the build report.

    Understand the Code Analysis results and learn how to fix them
    Now that we have gone through the Code Analysis configuration and the different ways of running it, let's dig into the results, how to understand them, and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project properties. Each result item contains:
    1. Check ID: the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
    2. Title: the title of the warning message.
    3. Description: a description of the problem or a suggested fix.
    4. File Name: the file name and the line of code that violates the rule.
    5. Category: the code analysis category of the warning.
    6. Warning/Error: depends on how you configured it in the rule set; the default level is Warning.
    7. Action:
    - Copy: copies the warning information to the clipboard.
    - Create Work Item: if you're connected to Team Foundation Server, you can create a work item (most probably a Task or a Bug) and assign it to a developer to fix a certain code analysis warning.
    - Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code, or you might believe that the analysis used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning, which makes the suppression more discoverable; In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project, which can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning; it does not suppress the warning globally. (A sketch of the in-source form appears at the end of this article, just before the references.)

    Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, click the Check ID hyperlink and you'll be directed to MSDN online (or a local copy, depending on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

    Create a Custom Code Analysis Rule Set
    The Microsoft standard rule sets provide groups of rules organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. Check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A
    - Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
    - Code Analysis for SharePoint apps and SPDisposeCheck: this post lists some of the rule sets you can run specifically for SharePoint applications and shows how to integrate SPDisposeCheck as well.
    - Code Analysis for SQL Server database projects: this post illustrates how to run static code analysis on T-SQL through SSDT.
    - ReSharper 8 vs. Visual Studio 2013: this document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
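    Picking up the Suppress Message option from above, here is a minimal sketch of what In Source produces; the class, method, and justification text are hypothetical, but the attribute, its namespace, and the CA1024 check id are real. Note that these suppressions are only honored when the CODE_ANALYSIS compilation symbol is defined, which Visual Studio sets automatically when running Code Analysis.

        using System.Diagnostics.CodeAnalysis;

        public class OrderProcessor // hypothetical class, for illustration only
        {
            // Category + CheckId come from the Code Analysis window; the
            // Justification makes the suppression self-documenting.
            [SuppressMessage("Microsoft.Design", "CA1024:UsePropertiesWhereAppropriate",
                Justification = "The lookup is potentially expensive, so a method is clearer.")]
            public int GetPendingOrderCount()
            {
                return 0; // placeholder body
            }
        }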
    References
    - A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World: http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext
    - What is New in Code Analysis for Visual Studio 2013: http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
    - Analyze the code quality of Windows Store apps using Visual Studio static code analysis: http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx
    - [Hands-on lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality: http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx

    Originally posted at "Hosam Kamel | Developer & Platform Evangelist": http://blogs.msdn.com/hkamel

    Read the article

  • 10gR2 10.2.0.4 Certified with EBS 12 on Windows Itanium x64

    - by Steven Chan
    Oracle Database 10g Release 2 version 10.2.0.4 is now certified with Oracle E-Business Suite Release 12 (12.0.4 or higher, 12.1.1 or higher) on the Windows Itanium x64 (64-bit) platform. The operating system supported on this platform is Windows Server 2003. This is a 'database-tier only' certification (previously known as a 'split configuration database tier' certification), where the application tier must be on a different, fully certified E-Business Suite R12 platform. This 'database-tier only' platform was previously certified with 12.0 and 10.2.0.3; customers can now apply the 12.1.1 Maintenance Pack to upgrade their application tier to 12.1.1 while running the 10gR2 database on this platform.

    Read the article

  • CQRS – Questions and Concerns

    - by Dylan Smith
    I’ve been doing a lot of learning on CQRS and Event Sourcing over the last little while, and I have a number of questions that I haven’t been able to answer.
    1. What is the benefit of CQRS when compared to a typical DDD architecture that uses Event Sourcing and properly captures intent and behavior via verb-based commands (other than scalability)?
    2. When using CQRS, what do you do with complex query-based logic?
    I’m going to elaborate on #1 in this blog post and I’ll do a follow-up post on #2. I watched through Greg Young’s video on the business benefits of CQRS + Event Sourcing, and first let me say that I thought it was an excellent presentation that really drives home a lot of the benefits of this approach to architecture (I watched it twice in a row, I enjoyed it so much!). But it didn’t answer some of my questions fully (I wish I had been there to ask these of Greg in person!). So let me pick apart some of the points he makes and how they relate to my first question above. I’m completely sold on the idea of event sourcing and have a clear understanding of the benefits it brings to the table, so I’m not going to question that. But you can use event sourcing without going to a CQRS architecture, so my main question is about the benefits of CQRS + Event Sourcing vs. Event Sourcing + a typical DDD architecture.
    [Figure: architecture with Event Sourcing + commands on the left, CQRS on the right]
    Greg talks about how the stereotypical architecture doesn’t support DDD, but is that only because his diagram shows DTO’s coming up from the client? If we use the same diagram but allow the client to send commands, doesn’t that remove a lot of the arguments that Greg makes against the stereotypical architecture? We can now introduce verbs into the system. We can capture intent (storing it still requires event sourcing, but you can implement event sourcing without doing CQRS). We can create a rich domain model (as opposed to an anemic domain model).
    Scalability is obviously a benefit that CQRS brings to the table, but like Greg says, very few of the systems we create truly need significant scalability. Greg also talks about the ability to scale your development efforts. He says CQRS allows you to split the system into three parts (client, domain/commands, reads) and assign three teams of developers to work on them in parallel, letting you scale your development efforts by 3x with nearly linear gains. But in the stereotypical architecture don’t you already have two separate modules that you can split your dev efforts between: the client that sends commands/queries and receives DTO’s, and the domain that accepts commands/queries and generates events/DTO’s? If this is true, it’s not really a 3x scaling you achieve with CQRS but merely a 1.5x scaling, which, while great, doesn’t sound nearly as dramatic (“I can do it with 10 devs in 12 months – let me hire 5 more and we can have it done in 8 months”).
    Making the query side “stupid simple” such that you can assign junior developers (or even outsource it) sounds like a valid benefit, but I have some concerns over what you do with complex query-based logic/behavior; I’m going to go into more detail on this in a follow-up blog post shortly. He also seemed to focus on how “stupid simple” it is doing queries against the de-normalized data store, but I imagine there is still significant complexity in the event handlers that interpret the events and apply them to the de-normalized tables; a rough sketch of those pieces follows below.
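    To make the discussion concrete, here is a minimal C# sketch of the pieces in question, using Greg's Deactivate Inventory Item example; every name here is hypothetical, and this is only one of many ways to shape these types.

        using System;
        using System.Collections.Generic;

        // A verb-based command: captures the user's intent, not just new field values.
        public class DeactivateInventoryItem
        {
            public Guid ItemId;
            public string Comment; // the "associated comment" from Greg's example
        }

        // The event the domain records when the command succeeds.
        public class InventoryItemDeactivated
        {
            public Guid ItemId;
            public string Comment;
        }

        // A read-side event handler: interprets events and applies them to a
        // de-normalized store so queries stay "stupid simple". This is where
        // the complexity mentioned above tends to accumulate.
        public class ActiveItemsProjection
        {
            private readonly IDictionary<Guid, string> _activeItems;

            public ActiveItemsProjection(IDictionary<Guid, string> activeItems)
            {
                _activeItems = activeItems; // stand-in for a de-normalized table
            }

            public void Handle(InventoryItemDeactivated e)
            {
                _activeItems.Remove(e.ItemId); // keep the read model query-ready
            }
        }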
    It sounds like Greg suggests that it's doing CQRS that allows us to apply Event Sourcing when we otherwise wouldn't be able to (~33:30 in the video). I don't believe this is true: I don't see why you wouldn't be able to apply Event Sourcing without separating out the commands and queries. The queries would just operate against the domain model instead of the database, but you'd still get the benefits of Event Sourcing. Without CQRS the queries would only be able to operate against the current state rather than the event history, but even in CQRS the domain behaviors can only operate against the current state, and I don't see that being a big limiting factor. If some query needs to operate against something that is not captured by the current state, you would just have to update the domain model to capture that information (no different than if that statement were made about a command under CQRS). Some of the benefits I do see being applicable are that your domain model might end up being simpler/smaller, since it only needs to represent the state needed to process commands and doesn't have to worry about the reads (like the Deactivate Inventory Item and associated comment example that Greg provides), and that some commands can be handled in a Transaction Script style, with the command handler simply generating events and not touching the domain model. It also makes it easier for your senior developers to focus on the command behavior and ignore the queries, which is usually going to be a better use of their time. And of course scalability. If anybody out there has any thoughts on this and can help educate me further, please either leave a comment or feel free to get in touch with me via email:

    Read the article
