Search Results


  • Need advice on framework design: how to make extending easy

    - by beginner_
    I'm creating a framework/library for a rather specific use case (a particular data type). It uses various Spring components, including Spring Data. The library has a set of entity classes properly set up, with corresponding service and DAO layers. The main work, and the main benefit, of the framework lies in the DAO and service layers. Developers using the framework should be able to extend my entity classes to add any additional fields they require, so I made the DAO and service layers generic enough to be used by such extended entity classes. I now face an issue in the IO part of the framework: it must be able to import the "special data type" into the database, and in this part I need to create new entity instances, hence I need the actual class being used. My current solution is to configure a Spring bean for the actual class used. The problem with this is that an application using the framework can then only use one implementation of the entity (the original one from me, or exactly one subclass, but not two different classes of the same hierarchy). I'm looking for suggestions / designs for solving this issue, for instance along the lines of the sketch below. Any ideas?
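
    One possible direction (a minimal sketch with hypothetical names, not the framework's actual API): parameterize the importer with a factory for the concrete entity class instead of wiring a single entity bean. An application then registers one importer per subclass, so several members of the same hierarchy can coexist:

        import java.util.function.Supplier;

        // Hypothetical stand-in for the framework's entity superclass.
        abstract class BaseEntity {
            abstract void populateFrom(String rawRecord);
        }

        class Importer<T extends BaseEntity> {
            private final Supplier<T> factory;

            Importer(Supplier<T> factory) {
                this.factory = factory;
            }

            T importRecord(String rawRecord) {
                T entity = factory.get();       // instantiates the caller's concrete subclass
                entity.populateFrom(rawRecord); // framework parsing logic stays generic
                return entity;
            }
        }

        // Each application wires as many importers as it has entity classes:
        //   new Importer<>(MyEntity::new);
        //   new Importer<>(OtherEntity::new);

    In Spring terms, each importer (or its factory) becomes its own bean, so nothing limits the application to a single entity implementation per hierarchy.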

  • Creating a bootable flash without overlayfs

    - by Septagram
    I want to create a USB stick to carry my Ubuntu around with me everywhere. It's not intended for spreading Ubuntu by installing it on every machine, but rather for running my configured system on any computer I come across. So far I have gone with installing Ubuntu via unetbootin; however, I have some issues with this. When installed with unetbootin, the original disk image is kept intact on the flash drive forever. Also, a file is created for persistent storage, and during boot it is accessed together with the image via overlayfs. This, in my opinion, has the following problems:

    - If the system is updated regularly, updated files from the image get written again to persistent storage, doubling the space they occupy and wasting precious room.
    - Persistent storage has a fixed size that you have to define from the start, again wasting precious space.
    - I'm not 100% sure, but using overlayfs may make disk access slower, more so on relatively slow devices.

    So I'd like to find another solution: either get rid of the original image, install Ubuntu "normally" on a separate ext2 partition, or maybe even install it on the main vfat partition of the USB stick. Suggestions?
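
    As an aside, if the fixed-size persistence file is the main blocker, it can be recreated at a larger size from any running Linux system. A sketch (casper-rw is the file name Ubuntu's live-boot tooling looks for; the path and size here are only examples):

        dd if=/dev/zero of=/media/usbstick/casper-rw bs=1M count=4096   # new 4 GB container file
        mkfs.ext4 -F /media/usbstick/casper-rw                          # format it for persistence

    This does not remove the overlayfs indirection itself; for that, a normal install onto its own ext2/ext4 partition is the usual route.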

  • Standards & compliances for secure web application development?

    - by MarkusK
    I am working with developers right now who write code the way they want, and when I tell them to do it another way they respond that it's just a matter of preference, and that they have their way and I have mine. I am not talking about code formatting, but rather about the way a site is organized into classes and the way they utilize them, the way they create functions, process forms, and so on. Their coding does not match my standards, but again they argue that it's a matter of preference and that, as long as the goal is achieved, there can be different ways to do it. I agree, but their way is proven to have bugs, and we spend a lot of time going back and forth with them to fix all the security and functionality problems, yet they still write the same code no matter how many times I ask them to stop doing certain things. Now I am ready to dismiss them, but a friend of mine told me that he has the exact same problem with the freelance developers he works with, so I don't want to trade one bad apple for another. The question: is there some worldwide (or at least Europe- and US-) accepted standard or compliance document on how to write secure web-based applications, and on what an application's architecture should be for it to be maintainable? Is there some general standard, applicable to any language (Ruby, PHP, Java...), that governs security, functionality, and code quality, or at least one for PHP and MySQL, which I use for my website? Then I could make them follow a strict standard and stop making excuses.
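
    For illustration: one baseline that virtually every secure-coding guideline agrees on for PHP/MySQL work is parameterized queries instead of string-built SQL. A minimal sketch (hypothetical connection details and table):

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'secret', [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]);

        // User input is bound as a parameter; it is never concatenated into the SQL.
        $stmt = $pdo->prepare('SELECT id, name FROM products WHERE category = :cat');
        $stmt->execute([':cat' => isset($_GET['category']) ? $_GET['category'] : '']);
        $products = $stmt->fetchAll(PDO::FETCH_ASSOC);

    Rules of this kind (input handling, output encoding, authentication, session management) are exactly what a written standard can pin down, so that "preference" stops being an argument.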

  • Hello PCI Council, are you listening?

    - by David Dorf
    Mention "PCI" to any retailer and you'll instantly see them take a deep breath and start looking for the nearest exit.  Nobody wants to be insecure, but few actually believe that PCI does anything more than focus blame directly on retailers.  I applaud PCI for making retailers more aware of the importance of security, but did you have to make them PAINFULLY aware?  POS vendors aren't immune to this pain either as we have to undergo lengthy third-party audits in addition to the internal secure programming programs.  There's got to be a better way. There's a timely article over at StorefrontBacktalk that discusses the inequity of PCI's rules, and also mentions that the PCI Council is accepting comments until April 15th. As a vendor, my biggest issue with PCI is that they require vendors to disclose the details of any breaches, in effect "ratting out" customers.  I don't think its a vendor's place to do this.  I'd rather have the trust of my customers so we can jointly solve the problem. Mary Ann Davidson, Oracle's Chief Security Officer, has an interesting blog posting on this very topic.  Its a bit of a long read, but I found it very entertaining and thought-provoking.  Here's an excerpt: ...heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give [the] PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. I encourage you to read the entire posting, Pain Comes Instantly, and then provide feedback to the PCI Council.

  • Still can't mount windows 8 drive after restart

    - by Ishai Bloch
    After following the instructions in other posts, I am still getting the same error when I try to mount my Windows 8 drive in Ubuntu 14.04 on a dual-boot system. I have disabled fast startup after shutdown, hybrid hibernation, and the preinstalled Asus Instant On service. I have also tried restarting Windows rather than shutting down. In all cases I get the same error message, namely:

        Error mounting /dev/sda4 at /media/jesse/OS: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda4" "/media/jesse/OS"' exited with non-zero exit status 14: Windows is hibernated, refused to mount.
        Failed to mount '/dev/sda4': Operation not permitted
        The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.

    I did not have this issue before upgrading from Ubuntu 12.04 to 14.04. For what it's worth, my computer is supposedly a "hybrid" with an SSD installed, although I can't see that the SSD is being used at all with my present settings. Any thoughts?
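
    For reference, the two workarounds the error itself points at look like this (a sketch using the device and mount point from the message above; note that remove_hiberfile discards the hibernated Windows session, so any unsaved state in it is lost):

        # safe: mount read-only, leaving the hibernation image untouched
        sudo mount -t ntfs-3g -o ro /dev/sda4 /media/jesse/OS

        # destructive: delete the hibernation file so the volume mounts read-write
        sudo mount -t ntfs-3g -o remove_hiberfile /dev/sda4 /media/jesse/OS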

  • Setting up Clojure Project And Sub Projects

    - by octopusgrabbus
    This is primarily a lein question about setting up a major project and its sub-projects; it is not intended as a discussion question. Instead, I am interested in a pointer to documentation or to a Clojure/lein best-practices link. I have a municipal property assessments application that splits two master files into different subset files, depending on whether a billing transfer is taking place or we want to batch-update new accounts, rather than making our assessment department enter new accounts once in their system and then again in the tax collection system. My application is going to be large enough that I can see a common-library lein project with support functions, such as splitting apart the files, and then individual lein projects that use the common library. Should the lein projects be set up at the same level, with support included through the project.clj/core.clj files? Is there an advantage to creating lein new projects underneath a major project? Is there a problem with combining all functions in one project? I could probably make one core.clj contain all flavors of the program, but coming from a C/C++ and Python background, I would prefer to have a lot of little projects.
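
    One common shape for this (a minimal sketch with hypothetical names): make the shared code its own lein project, run "lein install" in it to publish it to the local ~/.m2 repository, and depend on it from each application project:

        ;; common-lib/project.clj
        (defproject assessments/common-lib "0.1.0"
          :description "Shared file-splitting and parsing helpers"
          :dependencies [[org.clojure/clojure "1.5.1"]])

        ;; billing-transfer/project.clj
        (defproject assessments/billing-transfer "0.1.0"
          :dependencies [[org.clojure/clojure "1.5.1"]
                         [assessments/common-lib "0.1.0"]])

    The projects can sit side by side in one repository; lein doesn't care where they live on disk, only that the dependency has been installed (or deployed to a shared repository) before the consuming projects build.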

  • PASS: Budget Status

    - by Bill Graziano
    Our budget situation is a little different this year than in years past.  We were late getting an initial budget approved.  There are a number of different reasons this occurred.  We had different competing priorities and the budget got pushed down the list.  And that’s completely my fault for not making the budget a higher priority and getting it completed on time. That left us with initial budget approval in early August rather than prior to June 30th.  Even after that there were a number of small adjustments that needed to be made.  And one large glaring mistake that needed to be fixed.  We had a typo in the budget that made it through twelve versions of review.  In my defense I can only say that the cell was red so of course it had to be negative!  And that’s one more mistake I can add to my long and growing list of Mistakes I’ll Never Make Again. Last week we passed a revised budget (version 17) with this corrected.  This is the version we’re cleaning up and posting to the web site this week or next.

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). The timeline of what I've tried, starting on July 29th:

    - used 301 redirects for all pages (e.g. thenewhive.com/tag/art to newhive.com/tag/art)

    At this point we noticed that we had disappeared from search results when searching for "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company. So on August 5th I:

    - verified the new domain in Webmaster Tools (the old domain was already verified)
    - submitted a change of address request via Webmaster Tools / Configuration / Change of Address

    Then after another week, on August 13th, I:

    - went to Webmaster Tools / Health / Fetch as Google
    - fetched our homepage and a couple of sub-pages, all successfully
    - clicked "Submit to Index" for the homepage

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues; Crawl Errors: no data available. From Health - Index Status:

        Total indexed:     0
        Ever crawled:      42,490
        Not selected:      12
        Blocked by robots: 0

    I'm really at a loss here; any help would be appreciated.
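
    For what it's worth, the host-wide 301 described in the first step is commonly done with a permanent redirect at the old domain's server. A sketch for Apache (the post doesn't say which server is actually in use):

        <VirtualHost *:80>
            ServerName thenewhive.com
            ServerAlias www.thenewhive.com
            Redirect permanent / http://newhive.com/
        </VirtualHost>

    "permanent" is what emits the 301 status telling crawlers the move is for good; a 302 would not transfer the old domain's standing.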

  • My new favourite traceflag

    - by Dave Ballantyne
    As we are all aware, there are a number of traceflags: some documented, some semi-documented and some completely undocumented. Here is an undocumented one that Paul White (b|t) mentioned almost as an aside in one of his excellent blog posts. Much has been written about residual predicates and how a predicate can be pushed into a seek/scan operation. This is a good thing: it saves a lot of processing. For the uninitiated, though: given a simple SELECT statement like the one sketched below, the process SQL Server goes through to resolve it is:

    1. The index IX_Person_LastName_FirstName_MiddleName is navigated to find the first "Smith".
    2. For each "Smith", the middle name is checked for being null.

    Two operations! And the execution plan doesn't fully represent all the work being undertaken: there is only a single seek operation, and the work done to resolve the condition "MiddleName is not null" has been pushed into it. This can be seen in the operator's properties: "Seek predicate" is how the index has been navigated, and "Predicate" is the condition run over every row. A scan inside a seek! So the question is: how many rows were resolved by the seek, and how many by the scan? How many rows did the filter remove? Wouldn't it be nice if this operation could be split? That is exactly what traceflag 9130 does. Executing the query with the flag enabled changes the plan rather dramatically, and should change how we think about the index seek itself: a Filter operator appears and, unsurprisingly, its condition is "MiddleName is not null". It is now evident that the seek operation found 103 Smiths, and that 60 of those Smiths had a non-null MiddleName. This traceflag has no place on a production system; don't even think about it.
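
    A reconstruction of the kind of query being described (judging by the index name, the example targets the AdventureWorks sample database; this is not the post's verbatim code):

        SELECT FirstName, MiddleName
        FROM   Person.Person
        WHERE  LastName = 'Smith'
        AND    MiddleName IS NOT NULL;

        -- the same query with the traceflag applied at statement scope,
        -- splitting the residual predicate out into a visible Filter:
        SELECT FirstName, MiddleName
        FROM   Person.Person
        WHERE  LastName = 'Smith'
        AND    MiddleName IS NOT NULL
        OPTION (QUERYTRACEON 9130);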

  • use subdomain on different host

    - by Roy
    I want to accomplish something that I thought was simple. My wish is as follows: I have a domain name with hosting and a WordPress multisite (with subfolder setup) installed and running: gangleri.nl. I have another domain at another host and without hosting: monas.nl. I created a subdomain on gangleri.nl, monas.gangleri.nl, and the domain monas.nl redirects to that subdomain. Now what I want is for monas.nl to act like a website in its own right, not a website in a subdomain, with post URLs like monas.nl/posttitle. I first thought to do this with the DNS settings of monas.nl. I now have a URL forward; CURL is not what I want, and I did not manage to get A records or CNAMEs to work. I tried using the .htaccess file of the WP installation in monas.gangleri.nl: I tried 301s, rewrites and whatnot, but also without success. Meanwhile, I have been reading so much that I no longer have a clue what to do. An A record doesn't sound probable, since I have no IP for the subdomain, so an A record would point to gangleri.nl rather than to the subdomain. Also, I have no idea whether I should do something in the DNS settings of gangleri.nl or monas.nl, both, one of them, or something somewhere else. People talk about A records to subdomains while I can only use IPs, or CNAME settings that my host doesn't support, or something. I have the idea that I've tried everything, but the more I try and read about it, the less I can get my head around it. Could somebody tell me if what I want is possible and, if so, take me by the hand and guide me through it?
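
    The usual pattern (a sketch; the IP below is a placeholder from the documentation range, and WordPress itself must also be configured to answer for monas.nl, e.g. via multisite domain mapping) is to point monas.nl's DNS at the server hosting gangleri.nl instead of redirecting:

        ; zone file for monas.nl
        monas.nl.       IN A      203.0.113.10       ; the IP of the gangleri.nl server
        www.monas.nl.   IN CNAME  monas.gangleri.nl.

    With DNS pointing at the server, the address bar keeps showing monas.nl, which no redirect or frame-based forward can achieve; the bare domain needs an A record because a CNAME is not allowed at a zone apex.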

  • Memory limiting solutions for greedy applications that can crash OS?

    - by Hooked
    I use my computer for scientific programming. It has a healthy 8 GB of RAM and 12 GB of swap space. Often, as my problems have gotten larger, I exceed all of the available RAM. Rather than the offending program crashing (which would be preferred), it seems Ubuntu starts pushing everything into swap, including Unity and any open terminals. If I don't catch a runaway program in time, there is nothing I can do but wait: it takes 4-5 minutes to switch to a command prompt (e.g. Ctrl-Alt-F2) where I can kill the offending process. Since my own stupidity is out of scope of this forum: how can I prevent Ubuntu from thrashing itself to death when a single offending program uses up all of the available memory? At-home experiment*! Open a terminal, launch python and, if you have numpy installed, try this:

        >>> import numpy
        >>> [numpy.zeros((10**4, 10**4)) for _ in xrange(50)]

    * Warning: may have adverse effects; monitor the process via iotop or top to kill it in time. If not, I'll see you after your reboot.
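
    One stock guard for exactly this situation is the shell's ulimit built-in: cap the address space of the shell before launching the greedy program, and allocations then fail (in Python, with a MemoryError) instead of dragging the whole machine into swap. A sketch; the limit value is only an example:

        ulimit -v 6000000    # cap virtual memory at ~6 GB (value is in kilobytes)
        python myscript.py   # the cap applies to everything started from this shell

    Doing this in a throwaway terminal keeps the limit away from the rest of the session.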

  • Why is Ubuntu One slow to sync in 11.10, either backup or any sub-folder contents?

    - by pst007x
    I have been trying to sync my Documents folder of 1.4 GB; it still hasn't finished, and it has been syncing for a month. The top level syncs (files and folders directly in the Documents folder), but the contents of sub-folders just hang. (I gave up and stopped syncing this folder.) I have also tried using the backup facility in 11.10 to back up to Ubuntu One, after upgrading my Ubuntu One storage. It has been going now for 24 hours or so and has only backed up what looks like a couple of percent. (By the way, what an excellent idea to back up to Ubuntu One, if only we could get it to actually work! :-o) The odd thing is I can sync to Dropbox within hours rather than months. This is bad, and has been an issue since Ubuntu One's release. I have reported this problem, and there were promises it would be fixed in later releases, but it hasn't been. Canonical cannot help either... I posted on several blogs; a lot of people have the same problem but no fixes. So do I use Dropbox or another service until it is sorted? Ubuntu does not seem to see this as an issue, so I think a fix will be a long time coming. (However, I love the potential of Ubuntu One and its integration with the OS.) Yes, my internet speeds are fine, etc. :-) No firewall (sudo ufw status: STATUS: INACTIVE), no proxy, etc. NB: I have raised this as a separate question from others posted here because my question relates to Ubuntu 11.10, though I have commented elsewhere for help; my question also relates to deja-dup backup to Ubuntu One. Thanks

  • Unable to install ubuntu on a AMD 64 bit system with a AMD Radeon HD 6670 graphics card

    - by Tom Wingrove
    I've been running a dual-boot system (Ubuntu/Windows 7) for two years or so with no problems. I recently built an AMD 64-bit system and re-installed Windows, but when I went to load Ubuntu inside Windows I hit a snag: the screen during installation became small square blocks of colour, which obviously is a graphics driver problem. I tried various live disks, both 32- and 64-bit, for Ubuntu 12.04, 11.10 and 10.10; all but Ubuntu 10.10 had the same problem. Ubuntu 10.10 loaded OK, and I installed the presented ATI graphics driver as usual, but was left with the AMD Unsupported watermark at the bottom right of the screen. The graphics card installed in the computer is an MSI ATI Radeon HD 6670 (in effect an AMD Radeon HD 6670). I am fairly new to Linux, and while I can install and tweak the OS, I am rather baffled as to what to do. So my question is: will an up-to-date ATI driver be released in the near future for installation/live disks, or am I going to have to downgrade my graphics card to use Linux? Yours, Tom Wingrove

  • How to Manage and Use LVM (Logical Volume Management) in Ubuntu

    - by Justin Garrison
    In our previous article we told you what LVM is and what you might want to use it for; today we are going to walk you through some of the key management tools of LVM, so you will be confident when setting up or expanding your installation. As stated before, LVM is an abstraction layer between your operating system and your physical hard drives. What that means is that the hard drives and partitions your operating system sees are no longer tied to the physical hard drives and partitions they reside on. Rather, the drives your operating system works with can be any number of separate hard drives pooled together, or in a software RAID.
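
    To make the pooling concrete, the basic command sequence looks like this (a sketch with hypothetical device names; pvcreate, vgcreate and lvcreate are the standard LVM2 tools):

        sudo pvcreate /dev/sdb1 /dev/sdc1           # mark partitions as LVM physical volumes
        sudo vgcreate vg_data /dev/sdb1 /dev/sdc1   # pool them into one volume group
        sudo lvcreate -n lv_media -L 500G vg_data   # carve out a logical volume
        sudo mkfs.ext4 /dev/vg_data/lv_media        # format it like any other partition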

  • A Great Work : ADF Architecture TV

    - by mustafakaya
    I would like to share some information about a great piece of work from Oracle ADF Product Management: ADF Architecture TV. This channel covers various subjects, such as what you will need before starting a new ADF (or any software) project, how to select team members' skills, and how to design and implement ADF projects. When developing with a new technology, one of the challenges for technical staff is to learn both the features of the technology and how to implement them, while also considering the broader concepts of design, engineering and architecture. Many an IT project has come undone because IT staff were focused on the nitty-gritty details of writing software rather than looking at the "bigger picture" of how it will all go together. Oracle's "ADF Architecture TV" plans to address this issue by focusing on architectural issues and developer guidelines for writing ADF software solutions. The goal: to give ADF developers an understanding of the decisions you need to make to build a successful ADF application, the potential architectural blueprints to choose from when putting the application together, and potential best practices to take back to your development team. You can click here for ADF Architecture TV.

  • How to determine the amount to spend per phrase on Adwords research?

    - by Anonymous -
    My company would like to start a PPC advertising campaign. While I understand the concept and how to set everything up from a technical point of view, this is something I've never done before. Logically, we'd like to test a wide range of keywords that we think would lead to conversions, put together through brainstorming and with some help from Google's External Keyword Tool. A sub-question while I remember: am I correct in thinking that, in Google's keyword tool, keywords we expect to perform well that have low competition yet high monthly searches are good picks, since fewer advertisers means our bid per click will be lower? Is there a common benchmark or process for doing a round of keyword tests? Should we wait for 100 clicks on each keyword, see which ones have led to the most sales (or rather, sales that are sustainable given the cost per click of that keyword), then drop the ones which aren't converting and put that budget into the converting keywords? We realistically have a few hundred keywords/phrases we would like to test, but spending $100 per phrase works out as quite an expensive test: at 300 phrases that is $30,000. It would be nice to spend $5-10 per phrase, but I don't think the sample size would be large enough to determine anything reliably. Another approach might be to set up all the keywords and keep whichever bring the most sales within x hours/days. What is the common procedure with things like this? I know there are a plethora of companies that specialize in exactly this, but we anticipate doing a lot of it in the future, so it would make sense to do it in house if at all possible.

  • Versioning APIs

    - by Sharon
    Suppose that you have a large project supported by an API base. The project also ships a public API that end(ish) users can use. Sometimes you need to make changes to the API base that supports your project: for example, you need to add a feature that requires an API change, a new method, or altering one of the objects (or the format of one of those objects) passed to or from the API. Assuming that you are also using these objects in your public API, the public objects will change any time you do this, which is undesirable, as your clients may rely on the API objects remaining identical for their parsing code to work (cough, C++ WSDL clients...). So one potential solution is to version the API. But when we say "version" the API, it sounds like this must also mean versioning the API objects as well as providing duplicate method calls for each changed method signature. I would then have a plain old CLR object for each version of my API, which again seems undesirable. And even if I do this, I surely won't be building each object from scratch, as that would end up with vast amounts of duplicated code. Rather, the API is likely to extend the private objects we are using in our base API, but then we run into the same problem, because added properties would also be available in the public API when they are not supposed to be. So what is some sanity that is usually applied to this situation? I know many public services, such as Git for Windows, maintain a versioned API, but I'm having trouble imagining an architecture that supports this without vast amounts of duplicated code covering the various versioned methods and input/output objects. I'm aware that processes such as semantic versioning attempt to put some sanity around when public API breaks should occur. The problem is more that it seems like many or most changes require breaking the public API if the objects aren't better separated, but I don't see a good way to do that without duplicating code.
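
    One common answer (a minimal sketch with hypothetical types, not a prescription) is to keep the internal model private and expose thin per-version data contracts, with mapping code at the boundary. The mapping is boilerplate, but it is the only duplication, and it is what lets the internal model evolve freely:

        // Internal model: free to change at any time.
        class Order
        {
            public int Id;
            public decimal Total;
            public string Currency; // added later without breaking v1 clients
        }

        namespace Api.V1 { class OrderDto { public int Id; public decimal Total; } }
        namespace Api.V2 { class OrderDto { public int Id; public decimal Total; public string Currency; } }

        static class OrderMappings
        {
            public static Api.V1.OrderDto ToV1(Order o) =>
                new Api.V1.OrderDto { Id = o.Id, Total = o.Total };

            public static Api.V2.OrderDto ToV2(Order o) =>
                new Api.V2.OrderDto { Id = o.Id, Total = o.Total, Currency = o.Currency };
        }

    Mapping libraries (or a few lines of reflection) can remove most of the hand-written copying where the DTO and model property names line up.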

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments; I might receive a JSON request such as:

        {
          'type': 'Bike',
          'color': 'Red',
          'commentIds': [3193, 3194],
          'attachmentIds': [123, 413]
        }

    My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is: the Service_LineItem class now needs to fetch comments and file attachments and store the relationships, so it seems it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution; allowing a service access to a 'service fetching helper' seems wrong, and that should stay at the application level. I am using Doctrine 2 as an ORM, and I could technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would create tighter coupling, rather than leaving it up to the right service object.
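
    For what it's worth, the constructor-injection variant might look like this (a sketch; findByIds and the entity-building step are hypothetical placeholders):

        <?php
        class Service_LineItem_Bike
        {
            private $commentService;
            private $fileUploadService;

            public function __construct(Service_Comment $commentService,
                                        Service_FileUpload $fileUploadService)
            {
                $this->commentService = $commentService;
                $this->fileUploadService = $fileUploadService;
            }

            public function createFromRequest(array $data)
            {
                // Collaborators were handed in; this class never goes looking for them.
                $comments = $this->commentService->findByIds($data['commentIds']);
                $files = $this->fileUploadService->findByIds($data['attachmentIds']);
                // ...build the Bike entity, attach $comments and $files, persist...
            }
        }

    Constructor injection over setters has the advantage that a Service_LineItem_Bike can never exist in a half-wired state.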

  • Extracting Frustum Planes (Hartmann & Gribbs method)

    - by DAVco
    I have been grappling with the Hartmann/Gribb method of extracting the frustum planes for some time now, with little success. There doesn't appear to be a "definitive" topic or tutorial which combines all the necessary information, so perhaps this can be it. First of all, I am attempting to do this in C# (for PlayStation Mobile), using OpenGL-style column-major matrices in a right-handed coordinate system, but obviously the math will work in any language. My projection matrix has a near plane at 1.0, far plane at 1000, FOV of 45.0 and aspect of 1.7647. I want to get my planes in world space, so I build my frustum from the view-projection matrix (that's projectionMatrix * viewMatrix). The view matrix is the inverse of the camera's world transform. The problem is, regardless of what I tweak, I can't seem to get a correct frustum; I think I may be missing something obvious. Focusing on the near and far planes for the moment (since they have the most obvious normals when correct): when my camera is positioned looking down the negative z-axis, I get two planes facing in the same direction rather than in opposite directions. If I strafe my camera left and right (while still looking along the z-axis), the x value of the normal vector changes. Obviously, something is fundamentally wrong here; I just can't figure out what. Maybe someone here can?
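
    For comparison, here is the standard Gribb/Hartmann recipe under the conventions stated above. A sketch: it assumes clip = viewProj * p with column vectors, and viewProj stored column-major in a float[16] with element (row r, column c) at index c*4 + r. If the two conventions get mixed up, the classic symptom is exactly what is described: near and far planes facing the same way.

        // requires: using System;
        static float[][] ExtractFrustumPlanes(float[] m)
        {
            // Row r of the matrix, gathered across column-major storage.
            float[] Row(int r) => new[] { m[0 + r], m[4 + r], m[8 + r], m[12 + r] };
            float[] Add(float[] a, float[] b) => new[] { a[0]+b[0], a[1]+b[1], a[2]+b[2], a[3]+b[3] };
            float[] Sub(float[] a, float[] b) => new[] { a[0]-b[0], a[1]-b[1], a[2]-b[2], a[3]-b[3] };

            float[] r0 = Row(0), r1 = Row(1), r2 = Row(2), r3 = Row(3);
            var planes = new[]
            {
                Add(r3, r0), Sub(r3, r0), // left, right
                Add(r3, r1), Sub(r3, r1), // bottom, top
                Add(r3, r2), Sub(r3, r2), // near, far (OpenGL clip: -w <= z' <= w)
            };

            foreach (var p in planes) // normalize so (a,b,c) is unit length
            {
                float len = (float)Math.Sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
                for (int i = 0; i < 4; i++) p[i] /= len;
            }
            return planes; // plane satisfies ax + by + cz + d >= 0 inside; normals point inward
        }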

  • How to improve my backup strategy (rsync)?

    - by GUI Junkie
    I've seen the Q&As about backup solutions, but I'm asking anyway: one, because it's a personal situation I haven't solved yet, and two, because the answer can be useful for others. My situation is rather simple. I have two computers with two users and one external hard drive, and I want to sync/backup a shared directory. Currently I use rsync with the -azvu options to sync to the external drive. My problem is the round trip: all deleted files are restored! Using rsync I'm doing:

        Computer A --> External disk --> Computer A
        Computer B --> External disk --> Computer B

    (I should probably do External disk --> Computer A as a last step.) I've seen 'bup' mentioned, and other Q&As talk about Dropbox + rsync... Another option is maybe to have rsync delete files? Can my current backup strategy be improved in some other way?
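
    The resurrection of deleted files is the expected behaviour of -u (skip files that are newer on the receiver) without --delete: nothing ever tells the other side a file is gone. A sketch of one fix, assuming the shared directory on each computer is authoritative in turn:

        # push local state to the disk, propagating deletions
        rsync -azvu --delete /home/shared/ /media/external/shared/

        # later, on the other computer, pull with deletions as well
        rsync -azvu --delete /media/external/shared/ /home/shared/

    Note that --delete removes files on the receiver that no longer exist on the sender, so a machine that has been out of the loop must always pull before it pushes, or its stale tree will delete the other machine's newer files.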

  • GPU based procedual terrain borders?

    - by OnePie
    I'm working on a game that should preferably feature a combination of designed and procedurally generated terrain, where the designer specifies in somewhat detailed terms what type of terrain a given area will have (grassland, forest etc.) and then a procedural algorithm takes care of the rest. I'm not talking about Minecraft-style biomes, but rather about the game map for a strategy game. Each 'area' will not take up much of the screen, and is thus more akin to a tile whose texture is procedurally generated. While procedurally generating terrain textures on the GPU is not that difficult, the hard part is making the borders between them look good. Currently the 'tiles' are large enough to be visible (due to memory constraints, mainly: we are talking planetary-sized textures for a game taking place in space and on a continental ground view, with seamless transitions between them), and creating good borders between them with an algorithm fast enough to be useful has proven difficult. Sampling the n surrounding pixels and using the combined result did not yield very good borders, and was fairly slow on the GPU to boot (ca. 12 ms for me, and that is without any lighting or shading and with very simple terrain texture shaders). So, are there any practical known methods to solve this problem?

  • Tip: Keeping the ADF Mobile PDF Guide up to date

    - by Chris Muir
    This is a little tip for customers using Oracle's ADF Mobile. If you're like me, it's possible you don't rely on the online HTML version of the Mobile Developer's Guide for ADF, but rather download a PDF version of the file to use locally (look for the "PDF" link at the top right of the guide). For me the convenience of the PDF is that it's faster, I can search the whole document easily, I can read the document split across two pages on my home monitor, the document is still available if I lose my internet connection, and it's easy to read on my iPad (especially on long-haul flights to the US across the Pacific, where there is no internet connection!). The trigger point for me to download the Oracle PDF documentation has always been a new point release of JDeveloper. However, ADF Mobile, as an extension to JDeveloper, is releasing on a much faster and independent schedule, and this includes updates to the documentation. As such, the 11.1.2.4.0 ADF Mobile PDF guide you have locally might be out of date, and you should take the opportunity to download the latest version. This is particularly important for ADF Mobile, as not only are many new features being added each release and covered in the new documentation, but the guide is under rapid improvement to clarify much of what has been written to date. Our documentation teams are super responsive to suggestions on how to improve the guides, and this often shows per point release. How do you tell you have the latest guide? Look at the document part number, which right now is "E24475-03". This is a unique ID per release for the document: the part before the dash is the document number, and the part after the dash is the revision number. If the website's document number has a higher revision number, it's time to download a new, up-to-date PDF. One last thing to share: you can follow the ADF Mobile guide's document manager Brian Duffield on Twitter to keep abreast of updates.

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work, and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down, creating sub-domains centred around the procedures' relationships to other objects: group together a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models, it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity: (1) re-usability, (2) asynchronous development, (3) maintainability. Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development, and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons to break down classes, rather than just the methods inside of classes? Right now I do believe there is an issue with these mega-packages forming, but the only benefit I can really pin down for breaking them up is readability - something that experience gained from working with them would solve.

  • How to deal with a valuable person going in all directions?

    - by JVerstry
    I am working with someone who produces user content to be included in a software application. He is not a coder, but rather an expert in his field, sharing his knowledge. His contribution, taken piece by piece, is great, but he goes in all directions and has trouble producing work sequentially. He works on 25 pieces of content at the same time, and as soon as he reads something 'interesting', he wants to rewrite some of his material to improve its quality. He does not converge naturally. He collects tons of information and produces some valuable material, but in a completely unstructured way. We addressed this issue with him some time ago, and to try to solve it we created a document with the 100 items he had to fill in. The problem is, it does not seem to work very well. How do you deal with such people and collect their information? I was thinking about a new technique: ask him to send his bits, out of order, little by little, as soon as they are ready; keep a list of what remains to be done; and show him that list to give him direction. This situation is stressing the hell out of me. If his production were not good, I would not be trying so hard to make this work. If you have experience to share, it is welcome.

  • How to code Time Stop or Bullet Time in a game?

    - by David Miler
    I am developing a single-player RPG platformer in XNA 4.0. I would like to add an ability that makes time "stop" or slow down, with only the player character moving at the original speed (similar to the Time Stop spell from the Baldur's Gate series). I am not looking for an exact implementation, rather some general ideas and design patterns. EDIT: Thanks all for the great input. I have come up with the following solution:

        public void Update(GameTime gameTime)
        {
            GameTime newGameTime = new GameTime(
                gameTime.TotalGameTime,
                new TimeSpan(gameTime.ElapsedGameTime.Ticks / DESIRED_TIME_MODIFIER));
            gameTime = newGameTime;
            // ...update the slowed components with the scaled gameTime...
        }

    or something along these lines. This way I can pass a different time to the player component and a different one to the rest. It certainly is not universal enough to work for a game where warping time like this is a central element, but I hope it will work for this case. I kinda dislike the fact that it litters the main Update loop, but it certainly is the easiest way to implement it. I guess that is essentially the same as tesselode suggested, so I'm going to give him the green tick :)
