Search Results

Search found 25123 results on 1005 pages for 'domain model'.


  • Adjusting the rate of movement of different objects on the same timer

    - by theUg
    I have a series of objects moving along straight lines, and I want to implement slight changes in the velocity of each object. The constraint is the existing animation model. I am new to this and not sure of the best way to accommodate varying speeds, but what do I know? It is a Java application that repaints the panel every time the timer expires. The timer is a swing.Timer object configured with a timer-delay constant. Every time the game is stepped, each object's coordinates advance by an increment constant. Most of the objects are of the same class. Is there a fairly easy way to refactor the existing system to allow changing the velocity of an individual object? Is there some obvious, common solution I am not aware of? The idea I have right now is to set the timer delay fairly small and only move objects every so many animation cycles, so that the apparent speed can be adjusted by varying how often they get moved. But that seems fairly involved, and I do not think it is the most elegant solution in terms of performance, what with repainting the whole frame every 3-5 milliseconds. Can it be done by advancing the objects a varying number of times during a fixed interval (say 35 ms, for roughly 28 fps) and using the repaint() method to redraw just the individual object? Do I need to mess with pausing the animation for smoothness at higher redraw rates? Is it common practice to check for collisions at a larger step interval but draw the animation much more frequently?
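
    One way to keep the existing single-timer model and still vary speed per object is to give each object its own velocity and advance it by elapsed time on every tick, rather than by a fixed increment. A minimal Java sketch of that idea, assuming nothing about the poster's classes (Mover and Animator are illustrative names):

        import java.util.List;
        import javax.swing.Timer;

        // Each object carries its own velocity in pixels per second.
        class Mover {
            double x, y;           // position
            double vx, vy;         // velocity (px/s), can differ per object
            void step(double dt) { x += vx * dt; y += vy * dt; }
        }

        class Animator {
            private long last = System.nanoTime();

            Animator(List<Mover> movers, Runnable repaintPanel) {
                new Timer(16, e -> {                    // one shared ~60 fps timer
                    long now = System.nanoTime();
                    double dt = (now - last) / 1e9;     // seconds since last tick
                    last = now;
                    for (Mover m : movers) m.step(dt);  // varying speeds, one timer
                    repaintPanel.run();                 // single repaint per tick
                }).start();
            }
        }

    Because movement is scaled by real elapsed time, adjusting vx/vy changes an individual object's apparent speed without touching the timer delay, and collision checks can still run on every tick or only on every Nth tick.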

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administrating our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no bearing on them. In other words, their point is that workflows (and their execution) are tool-agnostic. My take is that DVCSs are better at encouraging flexible yet well-defined workflows, because of the inherent branching occurring in the background (anonymous branches), and because you can enforce a workflow through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even though I have made these arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: the main workflow we have been focusing on, because it makes sense to both sides, is the Dictator/Lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (there is just sharing work in a centralized way), whereas there is an actual pipeline in a DVCS depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

    Read the article

  • Herding Cats - That's My Job....

    - by user709270
    Written by Mike Schmitz - Sr. Director, Program Management, Oracle JD Edwards.  I remember seeing a Super Bowl commercial several years ago showing some well-dressed people on the African savanna herding cats. I remember turning to the people I was watching the game with and telling them, "You just watched my job description." Releasing software is a multi-faceted undertaking. In addition to making sure the code changes are complete, you also need to make sure the other key parts of a release are ready. For example, when you have a question about the software, will the person on the other end of the phone be ready to answer it? If you need training on that cool new piece of functionality, will there be an online training course ready for you to review? If you want to read about how the software is supposed to function, is there a user manual available? Putting all the release pieces together so they are available at the same time is what the JD Edwards Program Management team does. It is my team's job to work with all the different functional teams so that when a release is made generally available, you have all the things you need to be successful. The JD Edwards Program Management team uses an internal planning tool called the Release Process Model (RPM) to ensure all deliverables are accounted for in a release. The RPM makes sure all the release deliverables are ready at the correct time and in the correct format. The RPM really helps all the functional teams in JD Edwards know what release deliverables they are accountable for and when they are to be delivered. It is my team's job to make sure everyone understands what they need to do and when they need to deliver, and then to make sure they are all on track to deliver on time and in the right format. It is just that some days this feels like herding cats.

    Read the article

  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from PowerShell. I kept getting:

        <ResponsePacket xmlns="urn:Microsoft.Search.Response"><Response domain=""><Status>ERROR_NO_RESPONSE</Status><DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage></Response></ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. Turns out the URL for the SearchServer2008 search webservice was incorrect. I was calling

        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample PowerShell script:

        # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
        $error.clear()

        # Bad SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"
        # Good SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"

        $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential

        $queryXml = "<QueryPacket Revision='1000'>
          <Query>
            <SupportedFormats>
              <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
            </SupportedFormats>
            <Context>
              <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
              <!-- <QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
            </Context>
          </Query>
        </QueryPacket>"

        $statusResponse = $search.Status()
        write-host '$statusResponse:' $statusResponse

        $GetPortalSearchInfo = $search.GetPortalSearchInfo()
        write-host '$GetPortalSearchInfo:' $GetPortalSearchInfo

        $queryResult = $search.Query($queryXml)
        write-host '$queryResult:' $queryResult

    Read the article

  • Subdomain redirect to WWW

    - by manix
    I have the domain example.com and the subdomain test.example.com running on an Apache server. For some reason, when I try to visit test.example.com it is redirected to www.test.example.com and, as a consequence, a "Server not found" error is displayed in the browser. Both .htaccess files (root and subdomain folder) are empty. Additional facts: I have another subdomain, xyz.example.com, pointed at the public_html/xyz directory with some content inside (an index.html with a "hello world" message), and it works fine if I use xyz.example.com instead of www.xyz.example.com. So, can you help point me in the right direction? I have a VPS and am able to change any file if required. Below you can find my virtual host configuration:

        <VirtualHost xx.xxx.xxx:80>
          ServerName test.example.com
          ServerAlias www.test.example.com
          DocumentRoot /home/example/public_html/test
          ServerAdmin [email protected]
          UseCanonicalName Off
          CustomLog /usr/local/apache/domlogs/test.example.com combined
          CustomLog /usr/local/apache/domlogs/test.example.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
          ScriptAlias /cgi-bin/ /home/example/public_html/test/cgi-bin/
          # To customize this VirtualHost use an include file at the following location
          # Include "/usr/local/apache/conf/userdata/std/2/example/test.example.com/*.conf"
        </VirtualHost>

    Read the article

  • Concurrency pattern of logger in multithreaded application

    - by Dipan Mehta
    The context: we are working on a multi-threaded (Linux/C) application that follows a pipeline model. Each module has a private thread and encapsulated objects that process data, and each stage has a standard form of exchanging data with the next unit. The application is free of memory leaks and is thread-safe, using locks at the points where data is exchanged. The total number of threads is about 15, and each thread can have from 1 to 4 objects, making about 25-30 objects, all of which have some critical logging to do. Most discussion I have seen is about logging levels, as in Log4J and its various ports. The real big question is how the overall logging should actually happen. One approach is that all local logging does fprintf to stderr, and stderr is redirected to some file. This approach is very bad when logs become too big. If all objects instantiate their individual loggers (about 30-40 of them), there will be too many files, and unlike the approach above, one won't have the true order of events. Timestamping is one possibility, but it is still a mess to collate. If there is a single global logger (singleton pattern), it indirectly blocks many threads while one is busy writing a log entry; this is unacceptable when the processing in the threads is heavy. So what should be the ideal way to structure the logging objects? What are some best practices in actual large-scale applications? I would also love to learn from some real designs of large-scale applications to get inspiration from!
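
    A common answer to this trilemma is a single log back end with many cheap front ends: every object formats its message and pushes it onto one bounded queue, and a single dedicated writer thread drains the queue to the file. Producers only pay for a short enqueue, ordering is global, and there is one file. The post's context is Linux/C, where the same shape is a ring buffer plus a mutex and condition variable; what follows is only a hedged Java sketch of the structure, not the poster's code:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public final class AsyncLogger {
            private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(8192);

            public AsyncLogger() {
                Thread writer = new Thread(() -> {
                    try {
                        while (true) {
                            // The only thread that ever touches the output stream.
                            System.err.println(queue.take());
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // shutdown requested
                    }
                }, "log-writer");
                writer.setDaemon(true);
                writer.start();
            }

            // Called concurrently by all 25-30 objects; the cost is one enqueue.
            public void log(String module, String msg) {
                String line = System.nanoTime() + " [" + module + "] " + msg;
                if (!queue.offer(line)) {
                    // Queue full: dropping here is a deliberate policy choice;
                    // calling put() instead would apply back-pressure.
                }
            }
        }

    The queue bound is the knob: a bigger buffer absorbs bursts, while the full-queue policy (drop vs. block) decides whether logging can ever stall the pipeline.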

    Read the article

  • Why would Java app make RPC call to itself?

    - by amphibient
    I am working with a multithreaded, homegrown, multi-module app at my new job. We use the Thrift protocol for RPC calls between stand-alone applications in a distributed system. One of them listens on multiple ports, and I just noticed that it actually makes an RPC call to itself: a thread handling a request on one socket it listens to (a web service call) calls out to another port within the same app. I verified that it could accomplish the same thing by directly calling the method that the remote procedure ultimately invokes, as it is all within the same application, same JVM. To make it even more mysterious, the call is completely synchronous, i.e. no callbacks involved; the first thread just sits and waits while the call goes across the wire to itself and comes back. Now, I am perplexed why anybody would do it this way. It seems like phoning somebody who sits in the same room as you. Can anybody explain why the developer before me would come up with this model? Maybe there is a reason and I am missing something.

    Read the article

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website that will incorporate some jQuery viewing features. This project, it seems to me, will primarily deal with reading data from a database and making graphs out of the results, which will require me to build custom queries from whatever the client is looking at. I think it is going to be what this guy calls an Ad Hoc Query tool. My plan is to make it a database-driven website so I can utilize the jQuery dynamic viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that its functional nature makes it a good language for asynchronous work, and I read about how you can use it with LINQ to SQL and how easy it is to compose queries without actually embedding the query language. I understand the concept of the MVC design pattern, but I don't understand what they mean about C# being the front-end and F# being the back-end. Can someone clarify this for me? Also, what are your thoughts about doing the project this way? Any comments and thoughts are greatly appreciated; I feel that learning F# will be a great learning experience for me. My guess is that the F# back-end is the part that controls the calls to the database, so F# is possibly the Model part of the design pattern and C# the Controller, leaving HTML, JavaScript, and jQuery as my View. Clarify, please?

    Read the article

  • backface culling error

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface-culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;

        for (Triangle &t : m_triangles) {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }

        for (Triangle &t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (right-handed). What is the reason?

    Read the article

  • Are webhosts that require NS instead of a CNAME common?

    - by billpg
    I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit was when I was ready to put a site online and asked the support line what name I should point my 'www' CNAME to. They responded that they don't do that, and that I need to set my domain's NS records for the hosting to work: "Why would you ever want to do it that way? Our service to you includes DNS, and our servers are probably much better than the ones your registrar provides." This was a bit of a surprise, as all the other webhosts I've worked with happily support this. I've set up (e.g.) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost, and the webhost does name-based hosting for 'gallery.myfriend.example'. (Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example for the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.) Is my impression correct that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide service if they control the DNS too?

    Read the article

  • How to efficiently protect part of an application with a license

    - by Patrick
    I am working on an application that has many functional parts. When a customer buys the application, he buys the standard functionality, but he can also buy some additional elements for an additional price. All of the elements are part of the same application executable; a license key indicates which of the elements should be accessible. Some elements can easily be disabled if the user didn't pay for them; these are typically the modules reached via the application's menu. Other elements are more problematic: What if a part of the data model is related to an optional part? Do I build up those data structures anyway, so the rest of my application can assume they're always there, or do I leave them out and add checks throughout the rest of the application? What if an optional part is still useful for some internal tasks, but I don't want to expose it to the user? What if marketing wants to turn a standard part into an optional part? Everywhere in my application I assume that part is present; if it becomes optional, I have to add checks on it all over the application. I have some ideas on how to solve some of these problems (e.g. interfaces with dual implementations: one working implementation, and one that is used when the optional part is not activated; see the sketch below). Do you know of any patterns that can be used to solve this kind of problem? Or do you have any suggestions on how to handle this licensing problem? Thanks.
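
    For the "dual implementation" idea mentioned above, the usual shape is: the optional element hides behind an interface, a factory consults the license once at startup, and the rest of the application depends only on the interface, so no scattered license checks. A hedged Java sketch; the feature name and types are illustrative, not from the post:

        import java.util.Set;

        interface ReportModule {
            String buildReport();
        }

        final class RealReportModule implements ReportModule {
            public String buildReport() { return "full report"; }
        }

        // Stub used when the element was not purchased; the data model can
        // still rely on the interface always existing.
        final class DisabledReportModule implements ReportModule {
            public String buildReport() {
                throw new UnsupportedOperationException("reporting not licensed");
            }
        }

        final class ModuleFactory {
            static ReportModule reportModule(Set<String> licensedFeatures) {
                return licensedFeatures.contains("reporting")
                        ? new RealReportModule()
                        : new DisabledReportModule();
            }
        }

    Making a standard part optional later then means moving it behind such an interface once, rather than adding checks everywhere.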

    Read the article

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 hands-on lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: the NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.

    Summary of Lab Exercises. This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:

    1. Install Hadoop.
    2. Edit the Hadoop configuration files.
    3. Configure the Network Time Protocol.
    4. Create the virtual network interfaces (VNICs).
    5. Create the NameNode and the secondary NameNode zones.
    6. Set up the DataNode zones.
    7. Configure the NameNode.
    8. Set up SSH.
    9. Format HDFS from the NameNode.
    10. Start the Hadoop cluster.
    11. Run a MapReduce job.
    12. Secure data at rest using ZFS encryption.
    13. Use Oracle Solaris DTrace for performance monitoring.

    Read it now

    Read the article

  • Fusion CRM Release 7 RCDs and TOIs Now Available!

    - by Richard Lefebvre
    Fusion CRM Release 7 Release Content Documents (RCDs) and Transfer of Information (TOI) presentations are now available. In addition, you can find 245 new or changed product features for Release 7 on Oracle Product Features. All the new RCDs and TOIs can be found on the Fusion Learning Center:

    - Customer Relationship Management: TOIs - Customer Center, Define Segmentation Strategy, Enterprise Contracts, Oracle Social Network, Sales, and Territory Management; Business Process Model (BPM) RCDs - Customer Service, Marketing, Order Fulfillment, and Sales
    - Financials: BPM RCDs - Asset Lifecycle Management, Cash and Treasury Management, and Financial Control and Reporting
    - Human Capital Management: TOIs - Workforce Development, Compensation, Benefits, Worker Performance, Workforce Profiles, Enterprise Structures, Talent Review, Manage Transaction and Batch Processing, Delete HCM Storage Data, and Load Batch Data; BPM RCDs - Compensation Management, Enterprise Information Management, Workforce Deployment, and Workforce Development
    - Procurement: TOI - Requisitions; BPM RCD - Procurement
    - Project Portfolio Management: TOIs - Project Resources, Evaluate and Assign Resources, Maintain Resource Assignments, Manage Resource Demand, Manage Resource Supply, Manage Resource Utilization and Analytics, Project Management, Set Up Project Management; BPM RCD - Project Management
    - Supply Chain Management: TOIs - Manage New Product Definition and Approval, Manage Product Change Orders, Product Hub, Define Item Class; BPM RCDs - Materials Management and Logistics, Product Management and Supply Chain Planning

    Partners and customers can access the content from the following locations:

    - Partner access: BPM RCDs and TOIs on the Oracle Partner Network Fusion Learning Center; New Feature RCDs on Oracle Product Features
    - Customer access: TOIs in My Oracle Support (Note 1528594.1); BPM RCDs in My Oracle Support (Note 1559828.1); New Feature RCDs on Oracle Product Features

    Read the article

  • Using Clojure instead of Python for scalability (multi core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, ease of use apart, I shouldn't be coding in Python anymore but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does that make sense? Can't I make really efficient use of all cores in a more imperative language like Python, rather than in a Lisp dialect or another functional language? It seems that all of Clojure's benefits come from using immutable data; can't I do just that in Python and get all the same benefits? I once started to learn some Common Lisp and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But after a while I found myself struggling too much to do some simple things. I think some things are more imperative in nature, which makes it difficult to model them in a functional way, I guess. The question is: is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't think that using semaphores, lock mechanisms, or other similar concurrency mechanisms is a good alternative to Clojure's 'automatic' parallelization.

    Read the article

  • Ubuntu 12.04 slow boot on ASUS, attached with dmesg and bootchart

    - by stanleyhunk
    I heard that Ubuntu can boot in around 30 seconds, but my Ubuntu takes more than 60 seconds every time it boots. I also read in some forums that I need to post the dmesg output and a bootchart to identify which process is slowing down the boot. As I'm no expert in Ubuntu and wish to learn more about it, I humbly ask any pro here to teach me how.

    My laptop specs:
    Model: ASUS K45VS
    RAM: 8MB
    CPU: Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz x 8
    Graphics card: nVidia GeForce GT 645M
    HDD: 750GB
    OS: single-boot Ubuntu 12.04 LTS
    System.uname: Linux 3.8.0-39-generic #58~precise1-Ubuntu SMP Fri May 2 21:33:40 UTC 2014 x86_64
    System.release: Ubuntu 12.04.4 LTS
    System.kernel.options: BOOT_IMAGE=/boot/vmlinuz-3.8.0-39-generic root=UUID=c8a71503-bce8-406c-9a5f-5aa8284f5c7c ro quiet splash

    My dmesg (note the huge gap in the timestamps):

    [ 30.772656] cgroup: libvirtd (1961) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 30.772659] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
    [ 30.772683] cgroup: libvirtd (1961) created nested cgroup for controller "devices" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 30.772710] cgroup: libvirtd (1961) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 32.140335] nvidia 0000:01:00.0: irq 46 for MSI/MSI-X
    [ 32.505619] ACPI Error: Field [TMPB] at 1081344 exceeds Buffer [ROM1] size 262144 (bits) (20121018/dsopcode-236)
    [ 32.505624] ACPI Error: Method parse/execution failed [\_SB_.PCI0.PEG0.PEGP._ROM] (Node ffff880224e91f00), AE_AML_BUFFER_LIMIT (20121018/psparse-537)
    [ 802.034422] audit_printk_skb: 69 callbacks suppressed
    [ 802.034428] type=1400 audit(1400914804.392:35): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
    [ 1581.300901] type=1400 audit(1400915584.816:36): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"

    My Bootchart.png:

    Looking forward to improving both my boot time and my knowledge. Thanks in advance :)

    Read the article

  • Solving "XmlSchemaException: The global element '<elementName>' has already been declared"

    - by ChrisD
    I recently encountered this error when I attempted to consume a newly hosted WCF service. The service used the request/response model and had been properly decorated: the response and request objects were marked as DataContracts with a specified namespace, and my WCF service interface was marked as a ServiceContract sharing that namespace. Everything should have been fine, right?

        [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
        public interface IProductActivationService
        {
            [OperationContract]
            ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
        }

    Well, not exactly. Apparently the WSDL generator was having an issue:

        System.Xml.Schema.XmlSchemaException: The global element 'http://schemas.myclient.com/09/12:ActivateSoftwareResponse' has already been declared.

    After digging, I found the problem: the WSDL generator has some reserved suffixes for its entities, including Response, Request, and Solicit (see http://msdn.microsoft.com/en-us/library/ms731045.aspx). The error message is actually the result of a naming conflict. The WSDL generator uses the namespace of the service to build its reserved types. The service contract and data contract shared a namespace, which, coupled with the Response/Request suffixes I was using in my class names, resulted in the SchemaException.

    The fix, two options:

    1. Rename my data contract entities to use a non-reserved suffix (i.e., change ActivateSoftwareResponse to ActivateSoftwareResp); or
    2. Change the namespace of the data contracts to differ from the service contract namespace.

    I chose option 2 and changed all my data contracts to use a "http://schemas.myclient.com/09/12/data" namespace value. This avoided the name collision and I was able to produce my WSDL and consume my service.

    Read the article

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function). I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions. Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights will be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define some fixed number of lights in the shader, right? So if I want to stick with 8, should I write my general-purpose shader to have 8 lights and then use uniforms to tell it how many, and which, lights to use? Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors." If a fragment shader runs on some number of fragments in a given triangle, I assume each invocation must have its own stack to work on; are read-only variables copied there, or read when needed? My initial goal is to rework my code so that its behavior is virtually identical to the current implementation. What I have in mind is to create my own matrix stack, so that I can implement something along the lines of push/popMatrix and apply all my translations, rotations, and scales to this matrix, then provide the matrix to the vertex shader so that it can do very quick vertex transformations. Is this approach sound? Edit: my original intention was to ask if there is a tutorial that explains the bare minimum necessary to jump from fixed-function to shaders. Thanks!
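
    On the client-side matrix stack: that approach is what most fixed-function-to-ES-2.0 ports do; the stack lives entirely in application code and only the final top-of-stack matrix is uploaded as a uniform. A minimal sketch of such a stack (Java purely for illustration; the same structure works in any host language, and all names here are hypothetical):

        import java.util.ArrayDeque;
        import java.util.Deque;

        final class MatrixStack {
            private final Deque<float[]> stack = new ArrayDeque<>();
            private float[] current = identity();

            void push() { stack.push(current.clone()); }   // like glPushMatrix
            void pop()  { current = stack.pop(); }         // like glPopMatrix
            float[] top() { return current; }              // upload as the uniform

            // current = current * r, 4x4 column-major as GL expects
            void multiply(float[] r) {
                float[] m = new float[16];
                for (int col = 0; col < 4; col++)
                    for (int row = 0; row < 4; row++) {
                        float s = 0f;
                        for (int k = 0; k < 4; k++)
                            s += current[k * 4 + row] * r[col * 4 + k];
                        m[col * 4 + row] = s;
                    }
                current = m;
            }

            private static float[] identity() {
                float[] m = new float[16];
                m[0] = m[5] = m[10] = m[15] = 1f;
                return m;
            }
        }

    Translations, rotations, and scales then become multiply() calls against the current matrix, exactly mirroring the fixed-function call sequence.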

    Read the article

  • What type of pattern would be used in this case

    - by Admiral Kunkka
    I want to know how to tackle this type of scenario. We are building a person's background from scratch, and I want to know, conceptually, how to proceed with a secure object pattern in both design and execution. I've been reading on factory patterns, Model-View-Controller, dependency injection, and singleton approaches, and I can't seem to grasp or 'fit' these design decisions into what I'm trying to do. First and foremost, I started with a big jack-of-all-trades class; then I read some more, and one tip was to make sure your classes have only a single purpose, which makes sense, so I started breaking certain things down into other classes. Okay, cool. Now I'm looking at dependency injection and don't really know what's going on. An example/insight of the kind of hierarchy I need to accomplish:

    - class Person needs to access and build from a multitude of different classes
    - class Culture needs to access a sub-class for culture benefits
    - class Social needs to access class Culture, and other sub-classes
    - class Birth needs to access Social, Culture, and other sub-classes
    - classes Childhood/Adolescence/Adulthood need to access everything

    Also, depending on different rolls, this class hierarchy needs to create multiple people as well, such as a Family, with their backgrounds, using some if not all of these same classes. Think of it as a people generator, all random, with backgrounds and things that happen to them: ageing, death of loved ones, military careers, etc. Most of the generation is done randomly, making calls to an mt_rand function to pick from most of the selections inside the classes, guaranteeing the data to be absolutely random. I have most of the bulk data down and was looking for some insight from fellow programmers. What do you think?
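
    One way to read the dependency-injection advice for a hierarchy like this: injection can be nothing more than passing already-built stages into the next stage's constructor, with one shared random source so a whole person is reproducible from a single seed. A hedged sketch using the post's class names (Java-flavored for illustration; the same shape works in PHP with mt_srand/mt_rand, and all method bodies are placeholders):

        import java.util.Random;

        class Culture {
            Culture(Random rng) { /* pick culture and its benefits */ }
        }

        class Social {
            Social(Random rng, Culture culture) { /* derive social standing */ }
        }

        class Birth {
            Birth(Random rng, Culture culture, Social social) { /* birth events */ }
        }

        class Person {
            final Culture culture;
            final Social social;
            final Birth birth;

            // Stages are wired in dependency order; Person knows nothing about
            // how each stage rolls, only the order of construction.
            Person(Random rng) {
                culture = new Culture(rng);
                social  = new Social(rng, culture);
                birth   = new Birth(rng, culture, social);
            }
        }

    A Family generator then just constructs several Person objects from related seeds, reusing the same classes.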

    Read the article

  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers, I feel our goal is to provide good abstractions of the given domain model and business logic. But where should this abstraction stop? How do you trade off abstraction and all its benefits (flexibility, ease of change, etc.) against ease of understanding the code and all its benefits? I believe I tend to write overly abstracted code, and I don't know how good that is; I often tend to write it like some kind of micro-framework, which consists of two parts:

    1. Micro-modules which are hooked up in the micro-framework: these modules are easy to understand, develop, and maintain as single units. This is the code that actually does the functional work described in the requirements.

    2. Connecting code; and here, I believe, stands the problem. This code tends to be complicated because it is sometimes very abstracted and hard to understand at first; this arises because it is pure abstraction, with the grounding in reality and business logic performed by the code described in 1. For this reason, this code is not expected to change once tested.

    Is this a good approach to programming? That is, having the changing code very fragmented into many modules and very easy to understand, and the non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, code 1 more complex and interlinked, code 2 simpler), so that anybody looking through it can understand it in a reasonable amount of time but change is expensive, or is the solution presented above good, where the "changing code" is very easy to understand, debug, and change, and the "linking code" is kind of difficult? Note: this is not about code readability! Both the code in 1 and in 2 is readable, but the code in 2 comes with more complex abstractions, while the code in 1 comes with simple abstractions.

    Read the article

  • Help with dual booting Windows 8.1 Professional and Ubuntu 13.10

    - by user1292548
    I recently installed a clean version of Windows 8.1 Professional on my Lenovo Y500 (with a Samsung 256GB 840 Pro SSD). I have Windows all set up and running normally. I am trying to dual-boot Windows 8.1 and Ubuntu 13.10, but the installer neither offers "Install alongside..." nor shows my SSD partitions correctly when I choose the "Something else" option. I have created 25GB of free space in the Windows disk manager, but the Ubuntu installation screen shows the whole drive as free space. I have tried installing from a burned .ISO disc and from a bootable USB; the results are the same for both.

    Windows Disk Management screen: http://imageshack.us/a/img855/9504/59zu.jpg
    The Ubuntu installation screen: http://imageshack.us/a/img62/2712/9g6i.jpg

    I ran into this problem before when trying to dual-boot Ubuntu and Windows 7 Professional a month ago, but I gave up and never resolved the issue.

    --EDIT-- I tried what Eero Aaltonen suggested, and this is my result:

        ubuntu@ubuntu:~$ sudo parted /dev/sda print
        Warning: /dev/sda contains GPT signatures, indicating that it has a GPT table.
        However, it does not have a valid fake msdos partition table, as it should.
        Perhaps it was corrupted -- possibly by a program that doesn't understand GPT
        partition tables. Or perhaps you deleted the GPT table, and are now using an
        msdos partition table. Is this a GPT partition table?
        Yes/No? yes
        Model: ATA Samsung SSD 840 (scsi)
        Disk /dev/sda: 256GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start  End  Size  File system  Name  Flags

        ubuntu@ubuntu:~$

    Read the article

  • Is there a name for the Builder Pattern where the Builder is implemented via interfaces so certain parameters are required?

    - by Zipper
    So we implemented the builder pattern for most of our domain to make clear what is actually being passed to a constructor, and for the usual advantages a builder gives. The one twist was that we exposed the builder through interfaces, so that required and optional setters chain in a fixed order and the compiler ensures the required parameters are supplied. I was curious whether there is an existing name for this pattern. Example below:

        public class Foo {
            private int someThing;
            private int someThing2;
            private DateTime someThing3;

            private Foo(Builder builder) {
                this.someThing = builder.someThing;
                this.someThing2 = builder.someThing2;
                this.someThing3 = builder.someThing3;
            }

            public static RequiredSomething getBuilder() {
                return new Builder();
            }

            public interface RequiredSomething {
                public RequiredDateTime withSomething(int value);
            }

            public interface RequiredDateTime {
                public OptionalParameters withDateTime(DateTime value);
            }

            public interface OptionalParameters {
                public OptionalParameters withSomething2(int value);
                public Foo build();
            }

            public static class Builder implements RequiredSomething, RequiredDateTime, OptionalParameters {
                private int someThing;
                private int someThing2;
                private DateTime someThing3;

                public RequiredDateTime withSomething(int value) { someThing = value; return this; }
                public OptionalParameters withDateTime(DateTime value) { someThing3 = value; return this; }
                public OptionalParameters withSomething2(int value) { someThing2 = value; return this; }
                public Foo build() { return new Foo(this); }
            }
        }

    Example of how it's called:

        Foo foo = Foo.getBuilder().withSomething(1).withDateTime(DateTime.now()).build();
        Foo foo2 = Foo.getBuilder().withSomething(1).withDateTime(DateTime.now()).withSomething2(3).build();

    Read the article

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics.) The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management, and their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say there is no consensus about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("code smells", etc.). I also hope to introduce a number of agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"? So that's my question: what would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

    Read the article

  • Is it possible to use a VB master page to cover an entirely separate directory written in C#?

    - by Jason Weber
    I have a company website written in VB.NET. There are 5 master pages. I recently began utilizing a forum application, also ASP.NET 4.0, but this one is written in C#. My forum directory is domain.com/knowledgebase/. Is there any possible way to take one of my VB.NET master pages and somehow integrate it into the /knowledgebase/ directory? Here's what's currently in place. This is what's at the top of every page in my site:

        <%@ Page Title="USS Vision Inc." Language="VB" MasterPageFile="~/homepage.master" AutoEventWireup="false" CodeFile="default.aspx.vb" Inherits="_default" culture="auto" meta:resourcekey="PageResource1" uiculture="auto" Debug="true" %>

    This is what's in my /knowledgebase/ directory:

        <%@ Page Language="C#" AutoEventWireup="true" ValidateRequest="false" Inherits="YAF.ForumPageBase" culture="auto" uiculture="auto" %>
        <%@ Register TagPrefix="YAF" Assembly="YAF" Namespace="YAF" %>
        <script runat="server">

    Is it somehow possible to use, for instance, homepage.master in the /knowledgebase/ directory? If so, how would I accomplish this? Thanks for any guidance anybody can offer!

    Read the article

  • Telerik Extensions for ASP.NET MVC Q1 2010 is out!

    Today we shipped the Q1 2010 release out the door. Go download the open source version or, if you are a licensed customer, download it from your client.net account.

    What is new on the MVC front:

    - No longer in BETA
    - New components: TreeView, NumericTextBox, Calendar, DatePicker
    - New features: Grid grouping, Grid editing, Grid localization
    - Using jQuery 1.4.2
    - Lots of bug fixes

    The rest is mentioned in the release notes.

    Breaking changes from Q1 2010 Futures! There is one breaking change since the Q1 2010 Futures release: the Toolbar method of the GridBuilder has been renamed to ToolBar:

        <%= Html.Telerik().Grid(Model)
                //.Toolbar(commands => commands.Insert()) <- Old
                .ToolBar(commands => commands.Insert())   // <- New
        %>

    For a complete list of changed grid APIs, check the changes and backward compatibility help topic.

    By the way, if you still haven't cast your vote for a new product or feature, do it now! We will soon start development for Q2 2010.

    Read the article
