Search Results

Search found 8190 results on 328 pages for 'separate'.


  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so, while also managing to keep my nose poked into other projects (in a read-only, learning capacity of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them: it had obviously worked in one project and so was easily moved to a new one (probably by the same developer who originally wrote it). I mentioned this in one of the programmer meetings we hold occasionally and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into the possibility. However, I've come across a stumbling block pretty early on.

    Our team primarily focuses on MVC at the moment; we have projects mainly in 2.0 but are starting to branch out to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods. Initially, when creating this DLL, I included some shared classes that could be used across any project type (web, client, etc.), but then I started adding some shared modules that would be useful in our MVC applications only. That meant I had to include references to some Microsoft web DLLs in order to leverage some of the classes I was creating (at this stage MVC 2.0). Now my issue is that we have a shared DLL with references to web-specific libraries that could also be used in a client application. Not only that, the DLL initially referenced MVC 2.0, and we will eventually move to MVC 3.0 for all projects, but a lot of the classes in this library I expect to still be relevant to MVC 3. The code within this DLL is separated into its own namespaces, such as: CompanyDLL.Primitives, CompanyDLL.Web.Mvc, CompanyDLL.Helpers, etc.

    So, my questions are: Is it OK to have a shared library like this, or if we have web-specific features in it, should we create a separate web DLL targeted at a specific framework or MVC version? If it's OK, what kind of issues might we face when using a library that references MVC 2 in an MVC 3 project, for example? I would think we might run into some sort of compatibility issue, or situations where developers using the library don't realize they need the MVC 2.0 libraries when they only want to use some of the generic classes. The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. Still, the number of times I've seen classes and methods copied across projects because they are proven, tested code is a bit unnerving, to be perfectly honest!
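    One way to keep the web reference from leaking into desktop consumers - a sketch only, with invented type and member names - is to split the shared code into a core assembly that has no framework-specific references and a thin MVC-specific assembly that depends on it:

        // CompanyDLL.Core.dll - no reference to System.Web.Mvc, usable from desktop and web projects
        namespace CompanyDLL.Primitives
        {
            public static class StringHelpers
            {
                // Hypothetical helper shared by all project types
                public static string Truncate(string value, int maxLength)
                {
                    if (string.IsNullOrEmpty(value) || value.Length <= maxLength) return value;
                    return value.Substring(0, maxLength);
                }
            }
        }

        // CompanyDLL.Web.Mvc.dll - references System.Web.Mvc and is only pulled in by MVC projects
        namespace CompanyDLL.Web.Mvc
        {
            using System.Web.Mvc;

            public static class HtmlHelperExtensions
            {
                // Hypothetical MVC-only extension; desktop apps never load this assembly
                public static MvcHtmlString TruncatedLabel(this HtmlHelper html, string text, int maxLength)
                {
                    return MvcHtmlString.Create(CompanyDLL.Primitives.StringHelpers.Truncate(text, maxLength));
                }
            }
        }

    With a split like this, only the MVC projects ever reference CompanyDLL.Web.Mvc (and therefore System.Web.Mvc). If the MVC version question remains, an assembly binding redirect in the consuming project's web.config is the usual way to point an MVC 2 reference at MVC 3, though that would need testing against the actual classes involved.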

    Read the article

  • Caching items in Orchard

    - by Bertrand Le Roy
    Orchard has its own caching API that, while built on top of ASP.NET's caching feature, adds a couple of interesting twists. In addition to its usual work, the Orchard cache API must transparently separate the cache entries by tenant, but beyond that it also offers a more modern API. Here's for example how I'm using the API in the new version of my Favicon module:

        _cacheManager.Get("Vandelay.Favicon.Url", ctx => {
            ctx.Monitor(_signals.When("Vandelay.Favicon.Changed"));
            var faviconSettings = ...;
            return faviconSettings.FaviconUrl;
        });

    There is no need for any code to test for the existence of the cache entry or to later fill that entry. Seriously, how many times have you written code like this:

        var faviconUrl = (string)cache["Vandelay.Favicon.Url"];
        if (faviconUrl == null) {
            faviconUrl = ...;
            cache.Add("Vandelay.Favicon.Url", faviconUrl, ...);
        }

    Orchard's cache API takes that control flow and internalizes it into the API so that you never have to write it again. Notice how even casting the object from the cache is no longer necessary, as the type can be inferred from the return type of the lambda. The lambda itself is of course only hit when the cache entry is not found. In addition to fetching the object we're looking for, it also sets up the dependencies to monitor. You can monitor anything that implements IVolatileToken. Here, we are monitoring a specific signal ("Vandelay.Favicon.Changed") that can be triggered by other parts of the application like so:

        _signals.Trigger("Vandelay.Favicon.Changed");

    In other words, you don't explicitly expire the cache entry. Instead, something happens that triggers the expiration. Other implementations of IVolatileToken include absolute expiration or monitoring of the files under a virtual path, but you can also come up with your own.
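    To illustrate that last sentence (not taken from the article), a custom token only needs to report whether it is still current. This is a sketch; the namespace and the IsCurrent member reflect my understanding of Orchard's IVolatileToken and should be checked against the version in use:

        using System;
        using Orchard.Caching;

        // Sketch: a token that stays valid for a fixed time span after creation.
        public class ExpiresAfterToken : IVolatileToken
        {
            private readonly DateTime _validUntilUtc;

            public ExpiresAfterToken(TimeSpan lifetime)
            {
                _validUntilUtc = DateTime.UtcNow.Add(lifetime);
            }

            // The cache entry is evicted once any monitored token reports false.
            public bool IsCurrent
            {
                get { return DateTime.UtcNow < _validUntilUtc; }
            }
        }

        // Usage inside the Get() callback, alongside the signal from the article:
        // ctx.Monitor(new ExpiresAfterToken(TimeSpan.FromMinutes(10)));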

    Read the article

  • Java word scramble game [closed]

    - by Dan
    I'm working on code for a word scramble game in Java. I know the code itself is full of bugs right now, but my main focus is getting a vector of strings broken into two separate vectors containing hints and words. The text file that the strings are taken from has a colon separating them. So here is what I have so far:

        public WordApp() {
            inputRow = new TextInputBox();
            inputRow.setLocation(200,100);
            phrases = new Vector <String>(FileUtilities.getStrings());
            v_hints = new Vector<String>();
            v_words = new Vector<String>();
            textBox = new TextBox(200,100);
            textBox.setLocation(200,200);
            textBox.setText(scrambled + "\n\n Time Left: \t" + seconds/10 + "\n Score: \t" + score);
            hintBox = new TextBox(200,200);
            hintBox.setLocation(300,400);
            hintBox.hide();
            Iterator <String> categorize = phrases.iterator();
            while(categorize.hasNext()) {
                int index = phrases.indexOf(":");
                String element = categorize.next();
                v_words.add(element.substring(0,index));
                v_hints.add(element.substring(index +1));
                phrases.remove(index);
                System.out.print(index);
            }

    The FileUtilities file was given to us by the professor; here it is:

        import java.util.*;
        import java.io.*;

        public class FileUtilities {
            private static final String FILE_NAME = "javawords.txt";

            //------------------ getStrings -------------------------
            //
            // returns a Vector of Strings
            // Each string is of the form: word:hint
            // where word contains no spaces.
            // The words and hints are read from FILE_NAME
            //
            public static Vector<String> getStrings ( ) {
                Vector<String> words = new Vector<String>();
                File file = new File( FILE_NAME );
                Scanner scanFile;
                try {
                    scanFile = new Scanner( file);
                } catch ( IOException e) {
                    System.err.println( "LineInput Error: " + e.getMessage() );
                    return null;
                }
                while ( scanFile.hasNextLine() ) {
                    // read the word and follow it by a colon
                    String s = scanFile.nextLine().trim().toUpperCase() + ":";
                    if( s.length()>1 && scanFile.hasNextLine() ) {
                        // append the hint and add to collection
                        s+= scanFile.nextLine().trim();
                        words.add(s);
                    }
                }
                // shuffle
                Collections.shuffle(words);
                return words;
            }
        }
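    For what it's worth, a minimal sketch of just the splitting step (separate from the game code above) would take the colon position from each string rather than from the vector:

        import java.util.Vector;

        public class SplitExample {
            public static void main(String[] args) {
                Vector<String> phrases = new Vector<String>();
                phrases.add("JAVA:An object-oriented language");
                phrases.add("VECTOR:A growable array of objects");

                Vector<String> words = new Vector<String>();
                Vector<String> hints = new Vector<String>();

                for (String element : phrases) {
                    int colon = element.indexOf(':');   // position within this element, not within the vector
                    if (colon > 0) {
                        words.add(element.substring(0, colon));
                        hints.add(element.substring(colon + 1));
                    }
                }

                System.out.println(words);  // [JAVA, VECTOR]
                System.out.println(hints);  // [An object-oriented language, A growable array of objects]
            }
        }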

    Read the article

  • JSR 355 Final Release moves JCP to version 2.9

    - by heathervc
    JSR 355, JCP EC Merge, passed the JCP EC Final Approval Ballot on 13 August 2012, with 14 Yes votes, 1 abstain (1 member did not vote) on the SE/EE EC, and 12 yes votes (2 members were not eligible to vote) on the ME EC.  JSR 355 posted a Final Release this week, moving the JCP program version to JCP 2.9.  The transition to a merged EC will happen after the 2012 EC Elections, as defined in the Appendix B of the JCP (pasted below), and the EC will operate under the new EC Standing Rules. In the previous version (2.8) of this Process Document there were two separate Executive Committees, one for Java ME and one for Java SE and Java EE combined. The single Executive Committee described in this version of the Process Document will be implemented through the following process: The 2012 annual elections will be held as defined in JCP 2.8, but candidates will be informed that if they are elected their term will be for only a single year, since all candidates must stand for re-election in 2013. Immediately after the 2012 election the two ECs will be merged. Oracle and IBM's second seats will be eliminated, resulting in a single EC with 30 members. All subsequent JSR ballots (even for in-progress JSRs) will then be voted on by the merged EC. For the 2013 annual elections three Ratified and two Elected Seats will be eliminated, thereby reducing the EC to 25 members. All 25 seats will be up for re-election in 2013. Members elected in 2013 will be ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one year term. All members elected in 2014 and subsequently will serve a two-year term. For clarity, note that the provisions specified in this version of the Process Document regarding a merged EC will apply to subsequent ballots on all existing JSRs, whether or not the Spec Leads of those JSRs chose to adopt this version of the Process Document in its entirety. <end of Appendix> Also of note:  the materials and minutes from the July EC meeting and the June EC Meeting are now available--following the July EC Meeting, Samsung and SK Telecom lost their EC seats. The June EC meeting also had a public portion--the audio from the public portion of the EC meeting are now posted online.  For Spec Leads there is also the recording of the EG Nominations call.

    Read the article

  • Loading another domain's content in a modal iframe - acceptable?

    - by user568458
    Is it okay to load another page in an iframe in a modal pop-up window - in terms of legal and ethical standards around displaying 3rd party content? I remember a few years ago there was controversy and a debate about whether it was okay to load another domain's page content on your domain in a full-width iframe, with your site providing a masthead with controls for favouriting, linking etc (e.g. like StumbleUpon). I seem to recall that the consensus was, that it was okay so long as you were clearly in no way claiming ownership of the 3rd party content or attempting to modify the content and so long as there was a 'go to site' button or equivalent; and that sites could ask you to exclude them, but generally speaking, it's an acceptable practice. How acceptable would it be considered to be to load another site's page within a modal (lightbox-like) popup box (following all the above principles: clear attribution and a prominent button that kills the iframe and gives them the 3rd party original)? My expectation would be that it would follow the same principles, and be acceptable so long as these conditions were met. Note that I'm asking about the likely legitimate responses of the 3rd party sites and possible legal position, not about usability or UX. I'm aware that this should never ever ever ever ever be the standard way external links are loaded, and that 99% of the time linking to external content like this would be terrible for usability. My specific use case is one of those 1% of cases where loading a separate page in this tab actually wouldn't be the expected behaviour of a link: an interactive data visualisation tool that also acts as a 'browser' of external content (science papers underlying the data it navigates). All other links within the interactive will change something while staying on the same page. If the user clicked one of these external links by mistake (as people often do, even when they are clearly, noisily labelled) and then had to back-button back, they would lose their fine-grained position in the interactive tool (jquery bbq hashchanges being not appropriate for all elements of the tool). New window/tab will simply open the target page on the 3rd party domain. Opening a new window/tab would also be an alternative option (and has its own disadvantages) - my question is, whether this is an alternative that could be considered (in terms of acceptable practice around intellectual property etc), irrespective of which option is best for UX: which is something we'll decide the proper way, based on actual UX testing.

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .Net framework. My goal is to develop a 2D, RTS style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file-sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in second life. If their attempt was a game, the game-play would be notably horrible, to say the least! The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning them to objects that relate 1-1 with their real world counter-parts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself. I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games, to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system. Given my background and nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • Obtaining the correct Client IP address when a Physical Load Balancer and a Web Server Configured With Proxy Plug-in Are Between The Client And Weblogic

    - by adejuanc
    Some load balancers, like Big-IP, have built-in interoperability with WebLogic clusters; they know that WebLogic understands a header named 'WL-Proxy-Client-IP' to identify the original client IP. The problem comes when you have a web server configured with the WebLogic proxy plug-in between the load balancer and the back-end WebLogic servers: WL-Proxy-Client-IP is not designed to go to the web server proxy plug-in. To prevent IP spoofing, the plug-in will not use a WL-Proxy-Client-IP header that came in from the previous hop (which in this case is the physical load balancer, but could be anything), so it won't pass on what the load balancer has set for it. Unfortunately, under this architecture the header will be useless.

    To get the client IP from WebLogic you need to configure the extended log format and create a custom field that reads the appropriate header containing the IP of the client. On WLS versions prior to 10.3.3 use these instructions: You can create user-defined fields for inclusion in an HTTP access log file that uses the extended log format. To create a custom field you identify the field in the ELF log file using the Fields directive and then you create a matching Java class that generates the desired output. You can create a separate Java class for each field, or the Java class can output multiple fields. For a sample of the Java source for such a class, see "Java Class for Creating a Custom ELF Field":

        import weblogic.servlet.logging.CustomELFLogger;
        import weblogic.servlet.logging.FormatStringBuffer;
        import weblogic.servlet.logging.HttpAccountingInfo;

        /* This example outputs the X-Forwarded-For header into a custom
           field called MyOriginalClientIPField */
        public class MyOriginalClientIPField implements CustomELFLogger {
            public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff) {
                buff.appendValueOrDash(metrics.getHeader("X-Forwarded-For"));
            }
        }

    In this case we are using 'X-Forwarded-For', but it could be changed to whichever header contains the data you need to use. Compile the class, jar it, and prepend it to the classpath. In order to compile and package the class:

    1. Navigate to <WLS_HOME>/user_projects/domains/<SOME_DOMAIN>/bin
    2. Set up an environment by executing: $ . ./setDomainEnv.sh
       This will include weblogic.jar in the classpath, in order to use any of the libraries included under the weblogic.* package.
    3. Copy the content of the code above into a file named MyOriginalClientIPField.java.
    4. Run javac to compile the class: $ javac MyOriginalClientIPField.java
    5. Package the compiled class into a jar file by executing: $ jar cvf0 MyOriginalClientIPField.jar MyOriginalClientIPField.class
       Expected output is:
       added manifest
       adding: MyOriginalClientIPField.class(in = 711) (out= 711)(stored 0%)
    6. This will produce a file called MyOriginalClientIPField.jar

    This way you will be able to get the real client IP when the request passes through a load balancer and a web server before reaching WLS. Since 10.3.3 it is possible to configure a specific header that WLS will check when getRemoteAddr is called. That can be set on the WebServer MBean. In this case, set it to the X-Forwarded-For header coming from the load balancer as well.

    Read the article

  • What is a good design model for my new class?

    - by user66662
    I am a beginning programmer who, after trying to manage over 2000 lines of procedural PHP code, has now discovered the value of OOP. I have read a few books to get me up to speed on the beginning theory, but would like some advice on practical application.

    So, for example, let's say there are two types of content objects - an ad and a calendar event. What my application does is scan different websites (a predefined list) and, when it finds an ad or an event, it extracts the data and saves it to a database. All of my objects will share a $title and $description. However, the Ad object will have a $price and the Event object will have a $startDate. Should I have two separate classes, one for each object? Should I have a 'superclass' with the $title and $description, with two other Ad and Event classes carrying their own properties? The latter is at least the direction I am on now.

    My second question about this design is how to handle the logic that extracts the data for $title, $description, $price, and $date. For each website in my predefined list, there is a specific regex that returns the desired value for each property. Currently, I have an extremely large switch statement in my constructor which determines what website I am on, sets the regex variables accordingly, and continues on. Not only that, but now I have to repeat the logic that determines what site I am on in the constructor of each class. This doesn't feel right. Should I create another class, Algorithms, and store the logic there for each site? Should the functions that handle that logic be in this class, or specific to the classes whose properties they set?

    I want to take two things into account in my design: 1) I will add different content objects in the future that share $title and $description but have their own properties, so I want to be able to easily grow these as needed. 2) I will add more websites constantly (each with their own algorithms for data extraction), so I would like to plan for efficiently managing and working with these now. I thought about extending the Ad or Event class with a 'websiteX' class and storing its functions there, but this didn't feel right either, as now I have to manage hundreds of little website-specific class files.

    Note: I didn't know if this was the correct site or if Stack Overflow was the better choice. If so, let me know and I'll post there.
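    One common shape for this - a sketch with invented names, not the only reasonable design - is a small class hierarchy for the content plus one extractor class per site, so the per-site regex logic lives outside the content objects:

        <?php
        // Shared fields live in the base class.
        abstract class ContentItem {
            public $title;
            public $description;
        }

        class Ad extends ContentItem {
            public $price;
        }

        class CalendarEvent extends ContentItem {
            public $startDate;
        }

        // One extractor per website; each knows its own regexes.
        interface SiteExtractor {
            /** @return ContentItem[] items found in the raw HTML */
            public function extract($html);
        }

        class ExampleSiteExtractor implements SiteExtractor {
            public function extract($html) {
                $items = array();
                if (preg_match('/<h1>(.+?)<\/h1>/', $html, $m)) {
                    $ad = new Ad();
                    $ad->title = $m[1];
                    $items[] = $ad;
                }
                return $items;
            }
        }

        // The scanner just picks the right extractor for the site it is on.
        function extractorFor($siteName) {
            $map = array('example.com' => new ExampleSiteExtractor());
            return isset($map[$siteName]) ? $map[$siteName] : null;
        }

    With that split, adding a site means adding one extractor class (or one entry in the lookup array), and adding a content type means adding one subclass of ContentItem; neither change touches the other.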

    Read the article

  • Achieving decoupling in Model classes

    - by Guven
    I am trying to test-drive (or at least write unit tests for) my model classes, but I noticed that my classes end up being too coupled. Since I can't break this coupling, writing unit tests is becoming harder and harder.

    To be more specific: Model classes: these are the classes that hold the data in my application. They resemble POJOs (plain old Java objects), but they also have some methods. The application is not too big, so I have around 15 model classes. Coupling: just to give an example, think of a simple case of Order Header - Order Item. The header knows the item and the item knows the header (it needs some information from the header for performing certain operations). Then, let's say there is the relationship between Order Item - Item Report. The item report needs the item as well. At this point, imagine writing tests for Item Report; you need to have an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes.

    I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc., but with model classes I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other. Here are some workarounds that I can think of:

    Data generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests so that I get an instance of the OrderHeader class. The problem is that these generators get complicated pretty fast, and then I also need to test the generators themselves, defeating the purpose a little bit.

    Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on the IOrderHeader interface. So, in my unit tests, I can easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements the IOrderHeader interface (see the sketch below). The problem with this approach is the complexity that the model classes would end up having.

    Would you have other ideas on how to break this coupling in the model classes? Or how to make it easier to unit-test the model classes?
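    A sketch of that second workaround (names invented to match the example): the report depends only on the small interface it actually needs, and the test supplies a hand-rolled fake, so no real OrderHeader is ever constructed:

        // Production code: ItemReport needs only what IOrderHeader exposes.
        interface IOrderHeader {
            String getCustomerName();
        }

        class OrderItem {
            private final IOrderHeader header;
            OrderItem(IOrderHeader header) { this.header = header; }
            IOrderHeader getHeader() { return header; }
        }

        class ItemReport {
            String describe(OrderItem item) {
                return "Item ordered by " + item.getHeader().getCustomerName();
            }
        }

        // Test code: no real OrderHeader (or its dependencies) is required.
        class FakeOrderHeader implements IOrderHeader {
            public String getCustomerName() { return "Test Customer"; }
        }

        class ItemReportTest {
            public static void main(String[] args) {
                ItemReport report = new ItemReport();
                String text = report.describe(new OrderItem(new FakeOrderHeader()));
                System.out.println(text.equals("Item ordered by Test Customer") ? "PASS" : "FAIL");
            }
        }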

    Read the article

  • SEO effects of intermix of WP blog, custom PHP site and FB app game

    - by melbournetechlover
    We're a Melbourne tech company in the process of building a custom site in PHP. We plan to launch a "pre-launch" page which is also custom coded (CSS3 on the Twitter Bootstrap framework + HTML5 front end and PHP back end). On that site will be a link to a blog - the idea behind this is to build up ranking for a variety of relevant keywords prior to the full site going live (given that the majority of the site is a member-only community anyway, the blog is really the main way we'll be able to execute on-site SEO).

    Ideally, we would like to install WordPress in a subdirectory on our servers and just customise the header to look the same as the landing page of the website. But some questions and concerns... Is there any detrimental effect on SEO efforts in having two separate systems (one custom PHP, the other an installation of WordPress) to manage the blog vs the rest of the site? Are there any benefits or detriments to installing on a subdomain such as blog.sitename.com vs. sitename.com/blog? My preference would be sitename.com/blog as it feels neater - but I'm open to suggestions based on knowledge of Google preferences.

    Separately, we are building a Facebook app which is under another site name. Again, because we are launching this app first, from an SEO perspective would it actually be better to run it from a subdomain on the main site - e.g. gamename.mainsitename.com instead of on app.gamename.com? Currently we have it on app.gamename.com, but if there are SEO benefits to moving it to the other domain and server then we'll do it. Basically we don't want to have our SEO efforts divided - will Google algorithms prefer two sites heavily referring traffic, or is it better to focus our efforts on one? I guess that's the crux of the issue. But the other one is - does Google care about traffic accessing a page built for the Facebook app iframe - does that count toward rankings?

    Sorry, I hope these questions aren't too complex - but we're in the tech world every day and still can't seem to find a good answer to these ones... hence I'm taking to the forums!! Free beer for whoever can give me a solid answer!

    Read the article

  • SSH from external network refused

    - by wulfsdad
    I've installed open-ssh-server on my home computer(running Lubuntu 12.04.1) in order to connect to it from school. This is how I've set up the sshd_config file: # Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for #Port 22 Port 2222 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key HostKey /etc/ssh/ssh_host_ecdsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH #LogLevel INFO LogLevel VERBOSE # Authentication: LoginGraceTime 120 PermitRootLogin no StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding no X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net Banner /etc/sshbanner.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. UsePAM yes #specify which accounts can use SSH AllowUsers onlyme I've also configured my router's port forwarding table to include: LAN Ports: 2222-2222 Protocol: TCP LAN IP Address: "IP Address" displayed by viewing "connection information" from right-click menu of system tray Remote Ports[optional]: n/a Remote IP Address[optional]: n/a I've tried various other configurations as well, using primary and secondary dns, and also with specifying remote ports 2222-2222. I've also tried with TCP/UDP (actually two rules because my router requires separate rules for each protocol). With any router port forwarding configuration, I am able to log in with ssh -p 2222 -v localhost But, when I try to log in from school using ssh -p 2222 onlyme@IP_ADDRESS I get a "No route to host" message. Same thing when I use the "Broadcast Address" or "Default Route/Primary DNS". 
When I use the "subnet mask", ssh just hangs. However, when I use the "secondary DNS" I recieve a "Connection refused" message. :^( Someone please help me figure out how to make this work.

    Read the article

  • Defining a service layer: the text-based adventure

    - by Stacy Vicknair
    Applications these days have more options than ever for a user interface, and it's only going to grow. A successful product might require native applications for mobile devices, a regular web implementation, or even a gaming console. These systems often will be centralized and data driven. The solution is a fairly solitary one: a service layer! Simply put, take what's shared and put it behind a physical or abstract layer that defines the boundary between the specific user interface and the shared content.

    I know, I know, none of this is complicated. But sometimes it can be difficult to discern what belongs on which side of the line. For instance, say we're creating a service that will provide content for both an ASP.NET MVC application and a WP7 application. Although the content served to each application is the same, there are different paradigms and patterns for displaying that data in the different environments. In ASP.NET MVC, you may create a model specific to a page that combines necessary information. In the WP7 application you might require different sets of data that you will connect via MVVM with the view. The general rule of thumb is that any shared content, business rules, or data should exist separately. Any element that is specific to the current UI implementation should be included in a separate library or with the UI implementation itself. The WP7 application doesn't need my MVC-specific model classes. My MVC application doesn't require those INotifyPropertyChanged viewmodels that the WP7 application depends on. In both cases, there should be additional processing done above the service layer to massage the data to the application's specific needs.

    Service-ocalypse: the text based adventure

    What helps me the most in deciding whether something belongs coupled to the UI implementation or in the shared implementation is thinking of the simplest implementation you could have: a console application. You might have played a game like Peasant's Quest: the console app is the text-based adventure game version of your application. If your service was consumed in its simplest form, you would simply have a console-based API for it that issues requests. Maybe those requests aren't SWIM TO BOAT, but they might be CREATE USER JOHN. If I issue a request, I expect that request to be issued to the service. If the service has any exceptions or issues with my input, that business logic should be encapsulated in that service, not implemented in the UI. The service layer should be your functional application in its entirety, and anything above that layer should only assist with the display of that information.
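    Put concretely - a sketch with invented types, not a prescription - the shared service exposes plain data and operations, and each front end reshapes that data for its own display patterns:

        // Shared service layer: no MVC page models, no MVVM view models, just data and rules.
        public class UserDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface IUserService
        {
            UserDto CreateUser(string name);   // "CREATE USER JOHN" in the console version
            UserDto GetUser(int id);
        }

        // ASP.NET MVC side: maps the DTO into a page-specific model.
        public class UserPageModel
        {
            public string DisplayName { get; set; }

            public static UserPageModel From(UserDto dto)
            {
                return new UserPageModel { DisplayName = dto.Name.ToUpperInvariant() };
            }
        }

        // The WP7/MVVM side would instead wrap the same UserDto in an
        // INotifyPropertyChanged view model; neither mapping lives in the service.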

    Read the article

  • Oracle Application in DMZ (Demilitarized Zone)

    - by PRajkumar
    Business Needs

    Large organizations want to expose their Oracle Application services outside their private network (HTTP/HTTPS and SSL). Usually these exposures must exist to promote external communication, so they want to separate the external network from directly referencing the internal network.

    Business Challenges
    · Business does not want to compromise security information
    · Business cannot expose internal domain or internal URL information

    Business Solution
    A DMZ is the solution to this problem. In Oracle Application we can achieve this in the following way:
    · Oracle Application consists of a fleet of nodes (FND_NODES), so first decide which node has to be exposed to the public
    · To expose the node to the public, use the profile "Node Trust Level"
    · Set the node to Public/Private (Normal -> private, External -> public)
    · Set the "Responsibility Trust Level" profile to decide whether to expose an Application Responsibility inside or outside the firewall

    Solution Features
    · Exposed web services can be accessed by both internal and external users
    · Configurable and can be very easily rolled out
    · Internal network and business data are secured from outside traffic
    · Unauthorized access to the internal network from outside is prohibited
    · No need for VPN and Secure FTP server

    Benefits
    · Large organizations running Oracle Application can expose their web services (HTTP/HTTPS and SSL) to the internet without compromising security information and without exposing their internal domain

    Possible Weak Points
    · If the external firewall is compromised, then the external application server is also compromised, exposing an attack on the E-Business Suite database
    · There's nothing to prevent internal users from attacking the internal application server, also exposing an attack on the E-Business Suite database

    Reference Links
    · https://blogs.oracle.com/manojmadhusoodanan/tags/dmz

    Read the article

  • Why can't I install Microsoft Office 2007 in Ubuntu 11.04?

    - by DK new
    I am very new to Ubuntu and only just getting a hang of it, and my questions might sound stupid especially because I am a learner in terms of techie things as well. So because of the nature of work where everyone uses stupid Windows and Microsoft, I need to have access to MS Office 2007/2010 as documents with too many tables or images open all haywire in Libre Office (which has otherwise been great!). I have been reading up about installing MS Office through WINE/PlayonLinux, but have been unsuccessful so far. I downloaded a MS Office 2007 package from Pirate Bay, which I extracted into a folder. I tried numerous different ways to install through WINE and PlayonLinux, but will discuss the one which seems to be getting me somewhere. http://www.webupd8.org/2011/01/how-to-install-microsoft-office-2007-in.html ..... Initially, when I would click on the install button of MS Office, I get a message saying "The install location you selected does not have 1558MB free space. Free up space from the selected install location or choose a different install location". The install location in this case said "C:\Program Files\Microsoft Office", which confused me as I don't have drives named as C, Z etc. I went to configure WINE and under the drives tab, created a drive named A with the path location /media/cd025f16-433b-4a90-abb6-bb7a025d0450/. Also the space thing is confusing as I have at least 450GB of unused space on my computer. anyways, when I selected the A drive for installation, the installation starts, but soon I get the following error message, "Office cannot find Office.en-us\OfficeLR.Cab. Browse to a valid installation source" .... The part saying "OfficeLR.Cab" have said different things after the Office bit every time I have made an attempt. When I select the Office.en-us sub-folder or any other folder within the folder where MS Office 2007 is saved, it says "invalid source"! I have been trying to get this sorted since 15hrs now (addictive!) and have learnt loads of things in the process, but have not managed to crack it. It might be something stupidly simple I am not aware off that is stopping it. I would really appreciate some help! Thanks a lot.. Also I am still getting used to the language, so might have many questions Also I am using Ubuntu 11.04 (tag 11.04). Also I think I don't have windows -- when my friend installed Ubuntu on my new laptop which had Windows 7, he was trying to keep windows in a separate partition, but something happened and windows was not there! Looking forward to some support! Again thanks a lot

    Read the article

  • How do you conquer the challenge of designing for large screen real-estate?

    - by Berin Loritsch
    This question is a bit more subjective, but I'm hoping to get some new perspective. I'm so used to designing for a certain screen size (typically 1024x768) that I find that size to not be a problem. Expanding the size to 1280x1024 doesn't buy you enough screen real estate to make an appreciable difference, but will give me a little more breathing room. Basically, I just expand my "grid size" and the same basic design for the slightly smaller screen still works. However, in the last couple of projects my clients were all using 1080p (1920x1080) screens and they wanted solutions to use as much of that real estate as possible. 1920 pixels across provides just under twice the width I am used to, and the wide screen makes some of my old go to design approaches not to work as well. The problem I'm running into is that when presented with so much space, I'm confronted with some major problems. How many columns should I use? The wide format lends itself to a 3 column split with a 2:1:1 split (i.e. the content column bigger than the other two). However, if I go with three columns what do I do with that extra column? How do I make efficient use of the screen real estate? There's a temptation to put everything on the screen at once, but too much information actually makes the application harder to use. White space is important to help make sense of complex information, but too much makes related concepts look too separate. I'm usually working with web applications that have complex data, and visualization and presentation is key to making sense of the raw data. When your user also has a large screen (at least 24"), some information is out of eye sight and you need to move the pointer a long distance. How do you make sure everything that's needed stays within the visual hot points? Simple sites like blogs actually do better when the width is constrained, which results in a lot of wasted real estate. I kind of wonder if having the text box and the text preview side by side would be a big benefit for the admin side of that type of screen? (1:1 two column split). For your answers, I know almost everything in design is "it depends". What I'm looking for is: General principles you use How your approach to design has changed I'm finding that i have to retrain myself how to work with this different format. Every bump in resolution I've worked through to date has been about 25%: 640 to 800 (25% increase), 800 to 1024 (28% increase), and 1024 to 1280 (25% increase). However, the jump from 1280 to 1920 is a good 50% increase in space--the equivalent from jumping from 640 straight to 1024. There was no commonly used middle size to help learn lessons more gradually.

    Read the article

  • Adding JavaScript to your code dependent upon conditions

    - by DavidMadden
    You might be in an environment where your code is source controlled and where you might have build options for different environments. I recently encountered this where the same code, built with different configurations, would have the website at a different URL. If you are working with ASP.NET as I am, you will have to do something a bit crazy but worthwhile. If someone has a more efficient solution, please share.

    Here is what I came up with to make sure the client-side script was placed into the HEAD element for the Google Analytics script. GA wants to be last in the HEAD element, so if you are adding others in Page_Load then you should add theirs last. The settings object below is an instance of a class that holds information I collect. You could read from different sources depending on where you stored your unique ID for Google Analytics.

    *** This has been formatted to fit this screen. ***

        if (!IsPostBack)
        {
            if (!string.IsNullOrEmpty(settings.GoogleAnalyticsID))
            {
                string str = @"//<![CDATA[
                    var _gaq = _gaq || [];
                    _gaq.push(['_setAccount', '" + settings.GoogleAnalyticsID + @"']);
                    _gaq.push(['_trackPageview']);
                    (function () {
                        var ga = document.createElement('script');
                        ga.type = 'text/javascript';
                        ga.async = true;
                        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                        var s = document.getElementsByTagName('script')[0];
                        s.parentNode.insertBefore(ga, s);
                    })();";

                System.Web.UI.HtmlControls.HtmlGenericControl si =
                    new System.Web.UI.HtmlControls.HtmlGenericControl();
                si.TagName = "script";
                si.Attributes.Add("type", @"text/javascript");
                si.InnerHtml = str;
                this.Page.Header.Controls.Add(si);
            }
        }

    The code above will prevent the script from being emitted if the request is a PostBack, or if the ID could not be read or the settings were lost by accident. If you have a larger function to declare, you can use a StringBuilder to separate the lines. This is the most compact I wished to go while managing readability.

    Read the article

  • 3 Monitors, Ubuntu 12.04, Gnome 3, 2 nvidia cards, WITH xrandr or xinerama?

    - by Josh
    Ok. I have been banging my head against the wall for over a week now trying to get 3 monitors to work. I have:

    1 - Nvidia 8600 GT 512MB PCIe x16
    1 - Nvidia GT 240 1GB PCIe x16

    They are not running in SLI (obviously). I have tried to use everything from tutorials to a few templates, all the way up to nvidia-settings, etc. From what I hear, Xinerama doesn't like GNOME 3 because of the compositing, although I have read a lot about using xrandr instead and getting the compositing working; but alas, I cannot. It always either crashes X and I have to replace the xorg.conf with my backup, or it defaults to the gnome-classic desktop, and on top of that, when it does default, it keeps adding more and more panels. Basically, I want to be able to use all 3 monitors (yes, just like in Windows) to drag and drop between different windows. I have xorg-edit, but I am still not too sure how to set this up. Is there any way to:

    A) Get compositing working with 3 monitors, 2 nvidia cards, Xinerama, and GNOME 3? or
    B) Use TwinView with 3 monitors (I have heard it can be done by manually editing xorg.conf)? or
    C) Set up xrandr to draw all 3 monitors with compositing? or
    D) Use a separate X screen for each monitor, and be able to use GNOME with compositing, as well as drag between all 3? or
    E) ANYTHING. lol. I just want this to work.

    Any help you can provide would be greatly appreciated. BTW, I am running an Ubuntu mini install with GNOME. Everything works great but this. I can run it fine with 2 monitors and compositing, but not with 3. Also, what is the best GUI tool for editing xorg.conf? I'm not finding anything that is up to date at all, and also understandable by humans. haha. I'm actually an engineer by trade, and have been working with computers a LONG time, but this xorg.conf stuff is really confusing the crap out of me. lol Thanks!

    Read the article

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice. Hybrid rendering Now I know that ray-tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray-tracing) is a well known theory. However I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" object's shader. This would allow you to only ray-trace exactly what you need to. Could this be fast enough for real-time effects? Rendered physics I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels across a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would draw a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. Afaik traditional OpenGL "picking" is a similar method. This could be extended in a few ways: For larger (non-bullet) objects you render a larger portion of the screen. If you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works as the traditional slow-moving iterative physics test as well. You could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick. So are these techniques in use anywhere, and/or are they practical at all?

    Read the article

  • How to use multiple search keys?

    - by user32565
    I have a database wherein the files are named abcd100.00b, abcd101.00b, etc. I need code where, when the user enters abcd and then the range 100 to 110, all the files with the name abcd and in the range 100 to 110 get displayed. The following code can display only the first four characters. How do I implement this?

        <?php
        //capture search term and remove spaces at both ends if there are any
        $searchTerm = trim($_GET['keyname']);

        //check whether the name passed is empty
        if($searchTerm == "rinex_file") {
            echo "Enter name you are searching for.";
            exit();
        }

        //database connection info
        $host = "localhost"; //server
        $db = "rinex"; //database name
        $user = "m"; //database user name
        $pwd = "c"; //password

        //connecting to server and creating link to database
        $link = mysqli_connect($host, $user, $pwd, $db);

        //MYSQL search statement
        $query = "SELECT * FROM rinexo WHERE rinex_file LIKE '%$searchTerm%'";
        $results = mysqli_query($link, $query);

        /* check whether there were matching records in the table
           by counting the number of results returned */
        if(mysqli_num_rows($results) >= 1){
            echo '<table border="1">
            <tr>
            <th>rinex version</th>
            <th>program</th>
            <th>date</th>
            <th>maker name</th>
            <th>maker number</th>
            <th>observer</th>
            <th>agency</th>
            <th>position_X_Y_Z</th>
            </tr>';
            while($row = mysqli_fetch_array($results)){
                echo '<tr>
                <td>'.$row['rinex_version'].'</td>
                <td>'.$row['pgm'].'</td>
                <td>'.$row['date'].'</td>
                <td>'.$row['marker_name'].'</td>
                <td>'.$row['marker_no'].'</td>
                <td>'.$row['observer'].'</td>
                <td>'.$row['agency'].'</td>
                <td>'.$row['position_X_Y_Z'].'</td>
                </tr>';
            }
            echo '</table>';
        }else{
            echo "There was no matching record for the name " . $searchTerm;
        }
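    One way to read the requirement (a sketch only; the table and column names are copied from the snippet above, while the prefix/range inputs and the mysqlnd-based mysqli_stmt_get_result call are assumptions): match the prefix in SQL with a prepared statement, then check the numeric part of the file name in PHP:

        <?php
        // Assumed inputs: prefix ("abcd"), low (100), high (110).
        $prefix = trim($_GET['prefix']);
        $low    = (int) $_GET['low'];
        $high   = (int) $_GET['high'];

        $host = "localhost"; $db = "rinex"; $user = "m"; $pwd = "c";
        $link = mysqli_connect($host, $user, $pwd, $db);

        // Prepared statement: match the prefix in SQL, avoid injection.
        $stmt = mysqli_prepare($link, "SELECT * FROM rinexo WHERE rinex_file LIKE CONCAT(?, '%')");
        mysqli_stmt_bind_param($stmt, 's', $prefix);
        mysqli_stmt_execute($stmt);
        $result = mysqli_stmt_get_result($stmt);   // requires the mysqlnd driver

        while ($row = mysqli_fetch_array($result)) {
            // File names look like abcd100.00b: take the digits right after the prefix.
            if (preg_match('/^' . preg_quote($prefix, '/') . '(\d+)/', $row['rinex_file'], $m)) {
                $number = (int) $m[1];
                if ($number >= $low && $number <= $high) {
                    // Matching row: output it exactly as in the table-building loop above.
                }
            }
        }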

    Read the article

  • Applying the Knuth-Plass algorithm (or something better?) to read two books with different length and amount of chapters in parallel

    - by user147133
    I have a Bible reading plan that covers the whole Bible in 180 days. Most of the time, I read 5 chapters in the Old Testament and 1 or 2 (1.5) chapters in the New Testament each day. The problem is that some chapters are longer than others (for example Psalm 119, which is 7 times longer than an average chapter in the Bible), and the plan I'm following doesn't take that into account. I end up with some days having a lot more to read than others. I thought I could use programming to make myself a better plan. I have a data structure with a list of all chapters in the Bible and their length in number of lines. (I found that the number of lines is the best criterion, but it could have been number of verses or number of words as well.)

    I then started to think about this problem as a line-wrap problem. Think of a chapter like a word, a day like a line, and the whole plan as a paragraph. The "length" of a word (a chapter) is the number of lines in that chapter. I could then generate the best possible reading plan by applying a simplified Knuth-Plass algorithm to find the best breakpoints. This works well if I want to read the Bible from beginning to end. But I want to read a little from the New Testament each day in parallel with the Old Testament. Of course I can run the Knuth-Plass algorithm on the Old Testament first, then on the New Testament, and get two separate plans. But those plans merged are not an optimal plan. Worst-case days (days with extra much reading) in the New Testament plan will randomly occur on the same days as the worst-case days in the Old Testament. Since the New Testament has about 180*1.5 chapters, the plan is generally to read one chapter the first day, two the second, one the third, etc., and I would like the plan for the Old Testament to compensate for this alternating length. So I will need a new and better algorithm, or I will have to use the Knuth-Plass algorithm in a way that I've not figured out. I think this could be an interesting and challenging nut for people interested in algorithms, so I wanted to see if any of you have a good solution in mind.
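    For the single-testament case, the "simplified Knuth-Plass" idea can be written as a small dynamic program over the chapter lengths. This is a sketch with made-up data and a squared-deviation badness, not a full solution to the interleaving problem:

        def plan(chapter_lines, days):
            """Split chapters (in order) into `days` groups, minimizing the sum of
            squared deviations from the average number of lines per day."""
            n = len(chapter_lines)
            target = sum(chapter_lines) / days

            def badness(i, j):
                return (sum(chapter_lines[i:j]) - target) ** 2

            INF = float("inf")
            # best[d][j] = minimal badness for fitting the first j chapters into d days
            best = [[INF] * (n + 1) for _ in range(days + 1)]
            split = [[0] * (n + 1) for _ in range(days + 1)]
            best[0][0] = 0.0
            for d in range(1, days + 1):
                for j in range(1, n + 1):
                    for i in range(j):
                        cost = best[d - 1][i] + badness(i, j)
                        if cost < best[d][j]:
                            best[d][j], split[d][j] = cost, i
            # walk the split points back into a list of per-day chapter index ranges
            result, j = [], n
            for d in range(days, 0, -1):
                i = split[d][j]
                result.append(list(range(i, j)))
                j = i
            return list(reversed(result))

        # Example: 10 chapters of varying length (one very long, like Psalm 119) over 4 days.
        print(plan([30, 25, 40, 10, 210, 35, 28, 33, 27, 31], 4))

    One possible angle on the parallel-reading part would be to keep the same dynamic program for the Old Testament but subtract each day's already-known New Testament length from that day's target, so heavy New Testament days get lighter Old Testament targets - untested, but it fits the same badness framework.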

    Read the article

  • Practices for domain models in Javascript (with frameworks)

    - by AndyBursh
    This is a question I've to-and-fro'd with for a while, and searched for and found nothing on: what are the accepted practices surrounding duplicating domain models in JavaScript for a web application, when using a framework like Backbone or Knockout?

    Given a web application of a non-trivial size with a set of domain models on the server side, should we duplicate these models in the web application (see the example at the bottom)? Or should we use the dynamic nature of JavaScript to load these models from the server? To my mind, the arguments for duplicating the models are in easing validation of fields, ensuring that fields that are expected to be present are in fact present, etc. My approach is to treat the client-side code like an almost separate application, doing trivial things itself and only relying on the server for data and complex operations (which require data the client side doesn't have). I think treating the client-side code like this is akin to the separation between entities from an ORM and the models used with the view in the UI layer: they may have the same fields and relate to the same domain concept, but they're distinct things.

    On the other hand, it seems to me that duplicating these server-side models in the client is a clear violation of DRY and likely to lead to differing results on the client and server side (where one piece gets updated but the other doesn't). To avoid this violation of DRY we can simply use JavaScript's dynamism to get the field names and data from the server as and when they're needed.

    So: are there any accepted guidelines around when (and when not) to repeat yourself in these situations? Or is this a purely subjective thing, based on the project and developer(s)?

    Example

    Server-side model:

        class M {
            int A
            DateTime B
            int C
            int D = (A*C)
            double SomeComplexCalculation = ServiceLayer.Call();
        }

    Client-side model:

        function M(){
            this.A = ko.observable();
            this.B = ko.observable();
            this.C = ko.observable();
            this.D = function() {
                return A() * C();
            }
            this.SomeComplexCalculation = ko.observable();
            return this;
        }

        M.GetComplexValue = function(){
            this.SomeComplexCalculation(Ajax.CallBackToServer());
        };

    I realise this question is quite similar to this one, but I think this is more about almost wholly untying the web application from the server, where that question is about doing this only in the case of complex calculation.
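    If the decision goes the other way (load the shape from the server rather than duplicate it), the knockout.mapping plugin is one existing option. A sketch, assuming jQuery and knockout.mapping are loaded and that a hypothetical /api/m/1 endpoint returns the serialized server-side M:

        // Observables are generated from whatever field names the server sends,
        // so the client never repeats the model definition.
        $.getJSON('/api/m/1', function (data) {
            var viewModel = ko.mapping.fromJS(data);

            // Client-only members (computeds, UI state) are still added by hand.
            viewModel.D = ko.computed(function () {
                return viewModel.A() * viewModel.C();
            });

            ko.applyBindings(viewModel);
        });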

    Read the article

  • C Programming arrays - I don't understand how I would go about making this program; if anyone can just guide me through the basic outline please :) [on hold]

    - by Rashmi Kohli
    Problem

    The temperature of a car engine has been measured, from real-world experiments, as shown in the table and graph below:

    Time (min)   Temperature (oC)
    0            20
    1            36
    2            61
    3            68
    4            77
    5            110

    Use linear regression to find the engine's temperature at 1.5 minutes, 4.3 minutes, and any other time specified by the user.

    Background

    In engineering, many times we measure several data points in an experiment, but then we need to predict a value that we have not measured which lies between two measured values, such as in the problem statement above. If the relation between the measured parameters seems to be roughly linear, then we can use linear regression to find the relationship between those parameters. In the graph of the problem statement above, the relation seems to be roughly linear. Hence, we can apply linear regression to the above problem. Assuming y {y0, y1, ... yn-1} has a linear relation with x {x0, x1, ... xn-1}, we can say that:

        y = mx + b

    where m and b can be found with linear regression (the standard least-squares formulas):

        m = (n*sum(x*y) - sum(x)*sum(y)) / (n*sum(x^2) - (sum(x))^2)
        b = (sum(y) - m*sum(x)) / n

    For the problem in this lab, using linear regression gives us the following line (in blue) compared to the measured curve (in red). As you can see, there is usually a difference between the measured values and the estimated (predicted) values. What linear regression does is to minimize those differences and still give us a straight line (blue). Other methods, such as non-linear regression, are also possible to achieve higher accuracy and better curve fitting.

    Requirements

    Your program should first print the table of temperatures similar to the way it's printed in the problem statement. It should then calculate the temperature at minutes 1.5 and 4.3 and show the answers to the user. Next, it should prompt the user to enter a time in minutes (or -1 to quit), and after reading the user's specified time it should give the value of the engine's temperature at that time. It should then go back to the prompt.

    Hints

    • Use a one-dimensional array to store the temperature values given in the problem statement.
    • Use functions to separate tasks such as calculating m, calculating b, calculating the temperature at a given time, printing the prompt, etc. You can then give your algorithm as well as your pseudo code per function, as opposed to one large algorithm diagram or one large sequence of pseudo code.
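    A bare-bones sketch of that outline in C (the measured temperatures in an array, least-squares m and b in one function, then evaluation at whatever time the user enters):

        #include <stdio.h>

        #define N 6

        static const double temps[N] = {20, 36, 61, 68, 77, 110};  /* temperature at minute 0..5 */

        /* Least-squares slope and intercept for y = m*x + b over x = 0..N-1. */
        static void fit(double *m, double *b)
        {
            double sx = 0, sy = 0, sxy = 0, sxx = 0;
            for (int i = 0; i < N; i++) {
                sx  += i;
                sy  += temps[i];
                sxy += i * temps[i];
                sxx += (double)i * i;
            }
            *m = (N * sxy - sx * sy) / (N * sxx - sx * sx);
            *b = (sy - *m * sx) / N;
        }

        int main(void)
        {
            double m, b, t;
            fit(&m, &b);

            printf("Time (min)  Temperature (C)\n");
            for (int i = 0; i < N; i++)
                printf("%10d  %15.0f\n", i, temps[i]);

            printf("At 1.5 min: %.1f C\n", m * 1.5 + b);
            printf("At 4.3 min: %.1f C\n", m * 4.3 + b);

            while (printf("Enter a time in minutes (-1 to quit): "),
                   scanf("%lf", &t) == 1 && t >= 0)
                printf("Estimated temperature: %.1f C\n", m * t + b);

            return 0;
        }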

    Read the article

  • Cloud Computing Business Benefits

    - by workflowman
    If you have been living under a rock for the past year, you wouldn't have heard about cloud computing. Cloud computing is a loose term that describes anything that is hosted in data centers and accessed via the internet. It is normally associated with developers who draw clouds in diagrams indicating where services or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS) and more recently Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktop and on-premises servers to services and resources that are hosted in the cloud.  Benefits of Cloud Computing  There are clearly benefits in building applications using cloud computing, some of which are listed here:  Zero up- front investment:  Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services. The hardware team provisions servers and so forth under the requirements of the software team. Often the hardware team has a different budget that requires approval. Although hardware and software management are two separate disciplines, sometimes what happens is developers are given the task to estimate CPU cycles, disk space, and so forth, which ends up in underutilized servers.  Usage-based costing:  You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry.  Potential for shrinking the processing time:  If processes are split over multiple machines, parallel processing is performed, which decreases processing time.  More office space:  Walk into most offices, and guaranteed you will find a medium- sized room dedicated to servers.  Efficient resource utilization:  The resource utilization is handed by a centralized cloud administrator who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who have to regularly monitor these servers.  Just-in-time infrastructure:  If your system is a success and needs to scale to meet demand, this can cause further time delays or a slow- performing service. Cloud computing solves this because you can add more resources at any time.  Lower environmental impact:  If servers are centralized, potentially an environment initiative is more likely to succeed. As an example, if servers are placed in sunny or windy parts of the world, then why not use these resources to power those servers?  Lower costs:  Unfortunately, this is one point that administrators will not like. If you have people administrating your e-mail server and network along with support staff doing other cloud-based tasks, this workforce can be reduced. This saves costs, though it also reduces jobs.

    Read the article

  • OSB unit testing, part 1 by Qualogy

    - by JuergenKress
    In my current project, I inherited a lot of OSB components that were developed by (former) team members, but they all lack unit tests. This is a situation I really dislike, since it makes it much harder to refactor or bug-fix the existing code base. So, for all newly created components (and components I have to bug-fix) I strive to add unit tests. Of course, the unit tests will be created using my favourite testing tool: soapUI!

    Unit of test
    The unit test should be created for the service composition, which in OSB terms is the proxy service combined with its business service. Since you do not want to rely on any other services, you should provide mock services for all services invoked from your component under test. In a previous article, I wrote about mocking your services in soapUI. While that approach would also be valid here, creating a mock service (and certainly deploying it on a separate web server) violates one of the core principles of unit testing: make your unit tests as self-contained as possible, i.e. not depending on any external components. In this article, I will show you how to achieve this by simply providing a mock response inside your unit test.

    Scenario
    The scenario I implement for testing is a simple currency converter; the external request consists of a from currency, a to currency, and an amount (in the from currency). The service performs an exchange rate lookup using the WebServiceX CurrencyConverter and returns a response to the caller consisting of both the source and target currencies and amounts. For the purpose of unit testing, I will provide a mock response for the exchange rate lookup. Read the complete article here.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
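    Purely as an illustration (the element names and namespace below are made up for this sketch and are not taken from the actual WebServiceX WSDL), the canned mock response for the exchange-rate lookup can be as simple as a static SOAP payload that the test returns instead of calling the real service:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
          <soapenv:Body>
            <ConversionRateResponse xmlns="http://www.example.com/currency">
              <!-- Fixed exchange rate so the unit test stays deterministic -->
              <ConversionRateResult>1.25</ConversionRateResult>
            </ConversionRateResponse>
          </soapenv:Body>
        </soapenv:Envelope>

    Returning a fixed rate keeps the assertion on the converted amount deterministic, which is exactly what a self-contained unit test needs.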

    Read the article

  • git workflow for separating commits

    - by gman
    Best practice with git (or any VCS for that matter) is supposed to be to have each commit do the smallest change possible. But that doesn't match how I work at all. For example, I recently needed to add some code that checked whether the version of a plugin to my system matched the versions the system supports, and if not, print a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error messages, so I edited that code. That code was in the startup module of one entry point to the system. The plugin-checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test that my plugin-checking code works I needed to edit UI/UX code to make sure it tells the user "You need to upgrade". When all is said and done I've edited 10 files, changed dependencies, the 2 entry points are now both dependent on the colorization code, etc. etc. Being lazy, I'd probably just git add . && git commit -a the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question: are there workflows that work for you or that make this process easier? I don't think I can somehow magically always modify stuff in the perfect order, since I don't know that order until after I start modifying and seeing what comes up. I know I can use git add --interactive etc., but it seems, at least for me, hard to know whether I'm grabbing exactly the correct changes so that each commit will actually work. Also, since the changes are sitting in the working directory, it doesn't seem like it would be easy to run tests on each commit short of stashing all the changes. And then, if I were to stash and run the tests, and I had missed a few lines or accidentally added a few too many, I have no idea how I'd easily recover from that (as in, either grab the missing lines from the stash and then put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit). Thoughts? Suggestions? PS: I hope this is an appropriate question. The help says development methodologies and processes are on topic.
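    One common recipe that addresses exactly this (offered here as a well-known workflow, not as the poster's own or an accepted answer) is to stage one logical change at a time with git add -p, stash everything else with --keep-index so the tests run against only what is staged, and repeat; the test command and commit message below are placeholders:

        # Stage only the hunks that belong to the first logical commit
        git add -p

        # Stash the rest of the working tree while keeping the staged changes in place,
        # so the tree now contains exactly what the next commit will contain
        git stash push --keep-index

        # Run the test suite against just the staged changes
        make test          # or whatever your test command is

        # Commit the verified change, then bring the remaining work back
        git commit -m "Extract colorization into its own module"
        git stash pop

        # Repeat add -p / stash / test / commit until the working tree is clean

    If the tests fail because a hunk was missed, popping the stash restores the rest of the work so the staging can be adjusted before committing; note that popping after a commit can occasionally produce trivial conflicts, which git will flag.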

    Read the article

< Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >