Search Results

Search found 30414 results on 1217 pages for 'project failure'.

Page 48 of 1217

  • Windows Server 2008 R2 and Keyboards

    - by Brian
    Hello. I have a machine with Windows Server 2008 R2 installed. When the machine boots up, it reports a keyboard failure. The keyboard came from an older machine and is PS/2, so I got a PS/2-to-USB converter to make it work, but the boot still reports a keyboard failure and the keyboard doesn't respond. Is it simply because the keyboard is pretty old, or does it have to be a native USB keyboard? I'm going to look into a new one, but I want to make sure I don't run into this issue again... Thanks.

    Read the article

  • How to troubleshoot a service failure?

    - by AngryHacker
    I get a GPF dialog box out of the blue fairly often (usually about two hours after I turn on the computer). It basically says that svchost.exe had a failure; the corresponding Event Log entry is below.

        Event Type: Error
        Event Source: Application Error
        Event Category: (100)
        Event ID: 1000
        Date: 5/18/2010
        Time: 7:41:16 PM
        User: N/A
        Computer: DKHA-IPSA
        Description: Faulting application svchost.exe, version 5.1.2600.5512, faulting module ole32.dll, version 5.1.2600.5512, fault address 0x0004eaa9.

    Shortly after this error pops up, the computer pretty much grinds to a halt (some UI elements on the desktop simply stop responding) and I have to do a hard reboot. How do I troubleshoot this type of thing? P.S. The PC has all the latest patches and nothing is missing in Device Manager.
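    A common first step on an XP-era box (my suggestion, not from the post) is to see which services share the faulting svchost instance, since svchost.exe itself is only a host process:

        REM List each svchost.exe instance with its PID and the services it hosts
        tasklist /svc /fi "imagename eq svchost.exe"

    Narrowing the crash down to one hosted service makes the faulting module much easier to chase.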

    Read the article

  • OpenLDAP server logs filled with "TLS negotiation failure"

    - by WildVelociraptor
    I recently migrated an old OpenLDAP setup to a newer server with a more robust certificate setup. Currently, most hosts are required to verify that the cert matches the host:

        tls_checkpeer yes
        TLS_REQCERT always

    In the server logs there are multiple occurrences of:

        Nov 6 10:45:08 <servername> slapd[1773]: conn=2785646 fd=35 closed (TLS negotiation failure)

    These errors appear from multiple hosts, but there don't seem to be any issues actually logging into those servers with an LDAP account. Does anyone know what would cause these errors? The server is running Ubuntu 12.04.2 and OpenLDAP 2.4.28. The cert was generated using GnuTLS.
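    A sketch of how the handshake could be reproduced from one of the affected clients (the server name is a placeholder, not from the post):

        # Force StartTLS, require certificate validation, and print client-side debug output
        LDAPTLS_REQCERT=demand ldapsearch -ZZ -H ldap://ldap.example.com -x -b "" -s base -d 1

    If the verbose output points at certificate verification, the failing hosts are probably missing the new CA certificate rather than having a broken TLS stack.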

    Read the article

  • Disk boot failure in Windows 7 64-bit after installing the latest NVIDIA drivers

    - by Domchi
    I successfully installed the newest NVIDIA drivers (275.33) in Windows 7 64-bit and rebooted afterwards just in case. After the reboot, I got an error about a missing MBR. I disconnected the slave disk so that Windows doesn't get confused and got this message: DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER. It seems that Windows doesn't recognize the main disk anymore. I booted from the Windows install disc, but the main disk isn't listed as a possible Windows location to repair, and I can't get to it from the recovery prompt. The BIOS does recognize it, and I'm able to see it if I run diskpart; however, "detail disk" in diskpart says that there are no volumes on the disk. I also tried bootrec /FixMbr without effect, and bootrec /FixBoot, which gives the error message: Element not found. What else can I do? Why would diskpart say there are no volumes on the disk?
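    For reference, a minimal sequence from the recovery command prompt covering what the poster describes, plus the usual next steps (the disk number is an assumption):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> detail disk        (this is where "there are no volumes" shows up)
        DISKPART> exit
        bootrec /FixMbr
        bootrec /FixBoot             (reported here as "Element not found")
        bootrec /ScanOs
        bootrec /RebuildBcd

    The last two commands are not mentioned in the post; they rescan for Windows installations and rebuild the BCD store, and are worth trying before assuming the partition table itself is gone.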

    Read the article

  • Logon Failure: the target account name is incorrect after making a ghost image of a server

    - by cop1152
    I recently replaced a failing SCSI drive in a Windows 2000 server with an IDE drive. I made an image of the SCSI drive with Ghost and restored it to the new drive. The purpose of the machine is to give out DHCP at one location and host a couple of files. When I restarted the machine with the new drive, DHCP appears to be working fine, but I cannot get to any of the shares. Instead, I get the following message when attempting to navigate to them in Explorer:

        Logon Failure: the target account name is incorrect

    It appears that this machine is not communicating with the main domain controller. Changes to user accounts (performed on the domain controller) are not being replicated to this machine.
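    The symptoms are consistent with the machine's secure channel to the domain being out of sync after the restore (my diagnosis, not from the post). With the Windows 2000 Support Tools installed, the secure channel can be reset roughly like this (the machine, domain and DC names are placeholders):

        netdom reset MYSERVER /domain:MYDOMAIN /server:MYDC

    Failing that, removing the machine from the domain and rejoining it accomplishes the same thing.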

    Read the article

  • boot failure trying to install linux distro from a cd

    - by jdamae
    Hoping someone can help me get this installation from a CD going. I'm using an older PC: an HP Pavilion, 2.66 GHz, 512 MB RAM, with a BIOS revision dated 6/30/2003. I reclaimed an older drive (Seagate ST340810A) that seems to be working, as it's recognized (auto-detected) in the BIOS. I downloaded a mini.iso of Ubuntu 10.10 that I want to install and burned the image to a CD. My boot sequence is: First Boot Device [CDROM], and I disabled devices 2-4 so I can force it to read from the CD-ROM first. This old PC also has a separate CD writer, which is the secondary slave; the secondary master is the Toshiba DVD/ROM DSM-171 drive, where I placed the burned CD with the Linux distro. With these settings I cannot get it to boot; I get "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER". Where do I go from here? Thanks.
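    Before suspecting the drive or the BIOS, it's worth ruling out a bad download or a disc burned as a plain data CD instead of as an image (my suggestion, not from the post). On any Linux machine or live session the downloaded image can be checked with:

        # Compare the result against the checksum published alongside the mini.iso
        md5sum mini.iso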

    Read the article

  • Windows Login Failure

    - by Chris Bateson
    I'm getting an error in the Event Viewer, which is also generating a lot of Logon Failure messages on our syslog server, and I'm pretty much stuck on how to resolve it.

        Event ID: 536
        Logon Type: 3
        Reason: The NetLogon component is not active

    This is for a Windows Server 2003 system. I have checked here. We're using Shavlik Protect 9 to scan and deploy patches. Shavlik stores the credentials for the systems and uses those stored credentials to deploy patches. This system is able to scan and deploy to other systems on the network using those credentials, and no errors are generated. When installing to the local system that Shavlik is physically on, this error is generated. What's interesting is that it isn't generated during a scan, and the patches install fine. We've contacted Shavlik, only to get the response that they are unable to help since it's a Microsoft error. Has anyone seen this?
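    Given the stated reason ("The NetLogon component is not active"), the first thing I'd verify on the affected server (my suggestion, not from the post) is the state of the Netlogon service:

        sc query netlogon
        REM If it reports STOPPED, start it and make sure it starts automatically
        net start netlogon
        sc config netlogon start= auto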

    Read the article

  • DISK BOOT FAILURE after upgrading power supply

    - by Phenom
    After upgrading my power supply, I get the following error message when trying to boot into Windows 7: DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER. My Windows 7 installation is on a SATA hard drive. I'm able to work around the problem by hooking up my IDE hard drive as well; with it connected, the SATA hard drive boots fine. I don't like this workaround, though, because it means the IDE hard drive is drawing power even though it isn't being used. Why would a newer power supply need the IDE hard drive hooked up just to boot from the SATA hard drive? There are no boot files on the IDE hard drive; it is completely empty. My old power supply did not need it hooked up in order to boot the SATA hard drive.

    Read the article

  • deploy git project and permission issue

    - by nixer
    I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible location via a post-receive hook. The hook currently contains:

        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE=$WWW_ROOT git checkout -f
        exec chmod -R 750 $WWW_ROOT
        exec chown -R www-data:www-data $WWW_ROOT
        echo "finished"

    The hook cannot finish without an error message:

        chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted

    which means git does not have sufficient rights to do it. The git source path is /var/lib/gitolite/project.git/, which is owned by gitolite:gitolite, and with these permissions Redmine (which runs as the www-data user) can't reach the git repository to fetch changes. The whole project should be placed in /var/www_virt.hosting/domain_name/htdocs/, which is owned by www-data:www-data. What changes should I make so that the post-receive hook works properly and Redmine can read the repository? What I did was:

        # id www-data
        uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
        # id gitolite
        uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)

    but it did not help. In short, I want Apache to serve the project, Redmine to read the project's source files from git, and the hook to deploy to the www-data-accessible path, all without permission problems. What should I do?
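    A sketch of the group-based fix I would try, run once as root, using the paths and the id output from the post (the exact mode bits are my assumption). As an aside, the two exec lines in the hook replace the shell, so the chown line and the final echo never actually run:

        # Give the deploy target to www-data but make it group-writable for the gitolite group,
        # marking directories setgid so files created by the checkout keep that group
        chown -R www-data:gitolite /var/www_virt.hosting/domain_name/htdocs/
        chmod -R g+rwX /var/www_virt.hosting/domain_name/htdocs/
        find /var/www_virt.hosting/domain_name/htdocs/ -type d -exec chmod g+s {} \;
        # Give the gitolite group read access to the bare repo so Redmine (running as www-data,
        # which is already in the gitolite group per the id output) can fetch
        chmod -R g+rX /var/lib/gitolite/project.git/

    With this in place the hook no longer needs to chmod or chown at all; the checkout made by the gitolite user stays readable for Apache and Redmine through the shared group.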

    Read the article

  • Eliminate single point of failure for webservers?

    - by George Bailey
    I know that in DNS, each of the DNS servers will be tried to see if it responds. I know that in email, in the event of a failure the message goes to the next server in the list, or the mail is held for a period of time. As far as I know, with web servers the browser gets one of the web server IP addresses, tries it, and if it fails it gives up. Is this correct? If so, then the only way to direct traffic away from a failed IP address would be via the DNS servers, and even that would not update immediately?

    Read the article

  • Cloud hosting and single hardware point of failure?

    - by PeterB
    From talking to sales I thought Rackspace Cloud was running on a SAN plus compute nodes (as VMware's offerings do), only to find out it doesn't; so when the host server goes down for maintenance, all cloud servers on that host go down (in our case for 2.5 hours). I understand Amazon EC2 also has this single-server point of failure. Which cloud hosting solutions don't rely on a single server? I've yet to find such a list organised by architecture. Is there a term that distinguishes between these types of 'cloud'? Is one of these 'grid computing' and the other 'virtualisation'? Can a SAN-backed solution provide the same reliability as two mirrored cloud servers on (say) Rackspace Cloud? I am more familiar with the VMware architecture and would like to understand the advantages and disadvantages of each approach. I understand the standard architecture is to have multiple cloud servers and mirrored data between them; until we need multiple database servers, I'm wondering if a SAN/node hosting solution would provide the lack of downtime we need without the added complexity.

    Read the article

  • "Disk boot failure" error after installing Windows 7 on SSD

    - by Tony_Henrich
    I have a system with 3 SATA drives which runs fine. I got a new SSD and wanted to install a fresh Windows 7 on it, so I removed the boot drive and replaced it with the SSD. I installed Windows, and when it was done and rebooted, I got the message "Disk boot failure. Insert system disk and press enter". I reinstalled and still got the same message. I removed the SSD, put back the original drive, and got the same message! I checked the BIOS and things look good, but something is wrong. Two questions:

    1. Why isn't the new Windows installation booting from the SSD?
    2. Why isn't the machine booting using the previous working configuration anymore, after removing the SSD? I did connect the original drive during the second Windows installation, but it was on the last SATA connector. Would the Windows installer mess with its MBR sector?
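    From the installation disc's recovery command prompt, a sketch of how I'd check which partition holds the boot files and recreate them on the Windows partition (disk/partition numbers and the drive letter are assumptions):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> list partition
        DISKPART> select partition 1
        DISKPART> detail partition   (shows whether this partition is marked Active)
        DISKPART> exit
        bcdboot C:\Windows /s C:     (rewrites the boot files onto C:)

    A likely explanation for both questions is that the installer placed the boot files on one of the other attached disks, which would also explain why the original drive no longer boots on its own; that is a guess, though, not something the post confirms.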

    Read the article

  • LVM incorrectly reported missing after power failure

    - by mensi
    We have had a major power failure in the data center. We use a set of servers for our storage needs. The main server has several pairs of disks mirrored with mdadm. The resulting /dev/mdX devices are LVM physical volumes and belong to a big volume group holding all our data. After the power loss, one of the mdadm devices was not auto-detected due to a missing entry in mdadm.conf. As a consequence, the volume group had inactive logical volumes because of the missing PV. We were able to fix the mdadm config and reboot. pvscan shows all expected PVs, but one LV still does not come up. vgdisplay shows:

        [...]
        Cur PV: 3
        Act PV: 2
        [...]

    Neither vgscan nor pvscan shows any missing devices. What went wrong? How can we force LVM to activate all PVs?
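    A sketch of the activation commands I would try (the volume group and LV names are placeholders):

        # Re-scan metadata, then activate every logical volume in the group
        vgscan --mknodes
        vgchange -ay myvg
        # If one LV still stays down, look at which physical devices back it
        lvs -a -o +devices myvg
        lvchange -ay myvg/mylv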

    Read the article

  • Ping or accessing WAN IP from LAN results in failure on only one box

    - by ComputerUserGuy
    Morning/evening, gents. I purchased a radical domain name today to set up a name for my services and to set up SSL. I configured the SSL fine, but when I went to my website I couldn't connect. I can connect to the site with any other device in my house, and my friend can connect to it as well from outside the LAN. I am hosting the services on my computer, and from that same computer I can't access them. Whenever I ping the address using the command prompt I get "General Failure." (it saddens me that they couldn't come up with a better message, as it kind of brings me down). I'm not sure what the deal is here, as I have all of my firewalls down and my ports are forwarded. Running Windows 7. Thanks for the assistance, chaps.
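    This reads like the usual NAT loopback (hairpinning) limitation, where a router won't forward LAN traffic addressed to its own WAN IP back inside; that diagnosis is my assumption, not something the post establishes. A common workaround on the hosting machine itself is a hosts-file entry that points the domain at the LAN address (both values below are placeholders):

        # Append to C:\Windows\System32\drivers\etc\hosts (edit the file as Administrator)
        192.168.1.10    www.example.com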

    Read the article

  • Avoiding circular project/assembly references in Visual Studio with statically typed dependency conf

    - by svnpttrssn
    First, I want to say that I am not interested in debating any non-helpful "answers" to my question, such as suggestions to put everything in one assembly; there is no need for anyone to link pages titled "Separate Assemblies != Loose Coupling". My question is whether it is somehow possible (maybe with some Visual Studio configuration that allows circular project dependencies?) to use one project/assembly (I am here calling it the "ServiceLocator" assembly) for retrieving concrete implementation classes (e.g. with StructureMap), which can be referred to from other projects, while the ServiceLocator itself of course also needs to refer to the other projects containing the interfaces and the implementations.

    Visual Studio project example, illustrating the kind of dependency structure I am talking about: http://img10.imageshack.us/img10/8838/testingdependencyinject.png

    Please note in the above picture that the problem is how to let the classes in "ApplicationLayerServiceImplementations" retrieve and instantiate classes that implement the interfaces in "DomainLayerServiceInterfaces". The goal is to not refer directly to the classes in "DomainLayerServiceImplementations", but rather to use the "ServiceLocator" project to retrieve such classes; but then the circular dependency problem occurs...

    For example, a "UserInterfaceLayer" project/assembly might contain this kind of code:

        ContainerBootstrapper.BootstrapStructureMap(); // located in "ServiceLocator" project/assembly
        MyDomainLayerInterface myDomainLayerInterface = ObjectFactory.GetInstance<MyDomainLayerInterface>(); // referring to project/assembly "DomainLayerServiceInterfaces"
        myDomainLayerInterface.MyDomainLayerMethod();
        MyApplicationLayerInterface myApplicationLayerInterface = ObjectFactory.GetInstance<MyApplicationLayerInterface>(); // referring to project/assembly "ApplicationLayerServiceInterfaces"
        myApplicationLayerInterface.MyApplicationLayerMethod();

    The above code does not refer to the implementation projects/assemblies "ApplicationLayerServiceImplementations" and "DomainLayerServiceImplementations", which contain this kind of code:

        public class MyApplicationLayerImplementation : MyApplicationLayerInterface

    and

        public class MyDomainLayerImplementation : MyDomainLayerInterface

    The "ServiceLocator" project/assembly might contain this code:

        using ApplicationLayerServiceImplementations;
        using ApplicationLayerServiceInterfaces;
        using DomainLayerServiceImplementations;
        using DomainLayerServiceInterfaces;
        using StructureMap;

        namespace ServiceLocator
        {
            public static class ContainerBootstrapper
            {
                public static void BootstrapStructureMap()
                {
                    ObjectFactory.Initialize(x =>
                    {
                        // The two interfaces and the two implementations below are located in four different Visual Studio projects
                        x.ForRequestedType<MyDomainLayerInterface>().TheDefaultIsConcreteType<MyDomainLayerImplementation>();
                        x.ForRequestedType<MyApplicationLayerInterface>().TheDefaultIsConcreteType<MyApplicationLayerImplementation>();
                    });
                }
            }
        }

    So far, no problem, but the problem occurs when I want to let the class "MyApplicationLayerImplementation" in the project/assembly "ApplicationLayerServiceImplementations" use the "ServiceLocator" project/assembly to retrieve an implementation of "MyDomainLayerInterface". When I try to do that, i.e. add a reference from the project containing "MyApplicationLayerImplementation" to "ServiceLocator", Visual Studio complains about circular dependencies between projects.
    Is there any nice solution to this problem which does not imply using refactoring-unfriendly, string-based XML configuration that breaks whenever an interface or class or its namespace is renamed? / Sven

    Read the article

  • Code bases for desktop and mobile versions of the same app

    - by Code-Guru
    I have written a small Java Swing desktop application. It seems like a natural step to port it to Android, since I am interested in learning how to program for that platform, and I believe I can reuse some of my existing code base. (Of course, exactly how much reuse I can get out of it will only be determined as I start coding the Android app.) Currently I am hosting my Java Swing app on Sourceforge.net and use Git for version control. As I start creating the Android app, I am considering two options:

    1. Add the Android code to my existing repository, creating separate directories and Java packages for the Android-specific code and resources.
    2. Create a new Sourceforge project (or even host it elsewhere) and create a new Git repository.
       a. With a new repository, I can simply add the files from my original project that I will reuse. (I don't particularly like this option, as it will be difficult to modify both copies of the same file in both repositories.)
       b. Or I can branch the original repository. This adds the difficulty of merging changes to shared source files.

    Mostly I am trying to decide between choices 1 and 2b. If I'm going to branch the existing repository, what advantages are there to hosting it as a separate SF project (or even using another OSS hosting service), as opposed to keeping all my source code in the current SF project?
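    For what it's worth, a sketch of option 2b in plain git commands (the repository URL is a placeholder, not the poster's actual project):

        # Clone the existing repository into a separate working copy for the port
        git clone ssh://user@git.code.sf.net/p/myproject/code android-port
        cd android-port
        git checkout -b android
        # Later, fold shared-code fixes made on master into the android branch
        git fetch origin
        git merge origin/master

    The recurring merge in the last step is exactly the "difficulty of merging changes to shared source files" the poster mentions; option 1 avoids it by keeping one history, at the cost of a larger, mixed repository.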

    Read the article

  • Blackberry Apps - Importing a code-signed jar into an application project

    - by Eric Sniff
    Hi everyone, I'm working on a library project that BlackBerry Java developers can import into their projects. It uses protected RIM APIs, which require that it be code-signed, which I have done. But I can't get my JAR imported and working with a simple helloWorld app. I'm using the Eclipse plug-in BlackBerry JDE. Here is what I have tried. First: building myLibProject with BlackBerry_JDE_PluginFull_1.0.0.67 into a JAR, signing it, and importing it into a BlackBerry_JDE_PluginFull_1.0.0.67 application project: I get a class-not-found error while compiling the application project. Next: I imported myLibProject into a BlackBerry_JDE_PluginFull_1.1.1.* library project, built it into a JAR, signed it, and imported it into a BlackBerry_JDE_PluginFull_1.1.1.* application project. It built this time, but while loading up the simulator to test it I get the error ( Access violation reading from 0xFFFFFFC ) before the simulator can load up, and it crashes the simulator. Other stuff I've tried: importing the JAR into its own project and having the helloWorld app project reference that project. If I include the source in my application project it works fine... but I'm looking for a way to deploy this as compiled code. Any ideas? Or help?

    Read the article

  • Can Foswiki be used as a distributed Redmine replacement? [closed]

    - by Tobias Kienzler
    I am quite familiar with and love using git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) project management software with features similar to what Redmine offers, such as:

    - Issue and time tracking, milestones
    - Gantt charts, calendar
    - git integration, maybe some automatic linking of commits and issues
    - Wiki (preferably with MathJax support)
    - Forum, news, notifications
    - Multiple projects

    However, I am looking for a solution that does not require a permanently accessible server; i.e., as with git, each user should have their own copy, which can be easily synchronized with the others. It should also be possible to not have a copy of every project on every machine. Since Trac uses multiple instances for multiple projects anyway, I was considering using it, but I neither know how well it adapts to simply putting its database under git (which would be the easiest way to handle the distribution, git being used anyway), nor does it include all of Redmine's features. After checking http://www.wikimatrix.org for wikis with an integrated tracking system and RCS support, and filtering out seemingly stale projects, the choices basically boil down to Foswiki, TWiki and Ikiwiki. The latter doesn't seem to offer as many usability features, and in the TWiki vs Foswiki issue I tend towards the latter. Finally, there is Fossil, which starts from the other end by attempting to replace git entirely and tracking itself. I am, however, not too comfortable with the thought of replacing git, and Fossil's non-SCM features don't seem to be as developed. Now, before I invest too much time when someone else might already have tried this, I basically have two questions:

    1. Are there crucial features of project management software like Redmine that Foswiki does not provide, even with all the extensions available?
    2. How do I set Foswiki up to use git instead of the Perl RcsLite?

    Read the article

  • How to adopt scrum agile methodology for a small .Net team

    - by Thabo
    I am working at a small product-based company developing .NET applications. There is a small team with 5-6 developers. I am the person responsible for planning everything, but my primary role is software developer. Our current project is very unstable because of poor organization. Today my boss called me and told me to submit a report about the required resources, an appropriate methodology, the required man power and their salary scales to make the current project a success. I know I don't have enough organizational skills, and I need to go deeper into my programming skills, so I need to focus only on development; I can't manage the project any more. Now I am searching for some other way to make the ongoing development a success. My questions are:

    1. What is a suitable agile methodology for my team?
    2. Is Scrum suitable for the scenario mentioned above?
    3. If we adopt Scrum, what do we have to do next? (I think hiring someone new to manage the project is the most suitable option, so we would have to get a Scrum master and some other developers.)
    4. Are there any resources (books, blogs, etc.) with tips and advice for solving this problem?
    5. If Scrum is not a suitable methodology for our scenario, what else would be more suitable to adopt?

    Can anyone give a good solution for my problem?

    Read the article

  • What can be the cause of new bugs appearing somewhere else when a known bug is solved?

    - by MainMa
    During a discussion, one of my colleagues told me that he has some difficulties with his current project while trying to solve bugs. "When I solve one bug, something else stops working elsewhere," he said. I started to think about how this could happen, but can't figure it out. I sometimes have similar problems when I am too tired or sleepy to do the work correctly and to keep an overall view of the part of the code I am working on. Here, the problem seems to have persisted for a few days or weeks, and is not related to my colleague's focus. I can also imagine this problem arising on a very large, very badly managed project, where teammates have no idea of who does what, or of what effect a change they are making can have on others' work. This is not the case here either: it's a rather small project with only one developer. It could also be an issue with an old, badly maintained and never documented codebase, where the only developers who could really imagine the consequences of a change left the company years ago. Here, the project has just started, and the developer doesn't build on anyone else's codebase. So what can be the cause of such an issue in a fresh, small codebase written by a single developer who stays focused on his work? What may help? Unit tests (there are none)? Proper architecture (I'm pretty sure the codebase has no architecture at all and was written with no preliminary thinking), requiring a complete refactoring? Pair programming? Something else?

    Read the article

  • What the VC++ compiler/linker does when building a C++ project with Managed Extensions

    - by ???
    The initial problem is that I rebuilt a C++ project with debug symbols and copied it to the test machine. The output of the project is an external COM server (an .exe file). When calling a COM interface function, there's an RPC call failure:

        COMException(0x800706BE): The remote procedure call failed.

    According to the COM HRESULT design, if the facility code is 7 it's actually a Win32 error, and the Win32 error code here is 0x6BE, which is the above-mentioned "The remote procedure call failed". All I do is replace the COM server .exe file; the original file works fine. When I looked into the project, I found it's a C++ project with Managed Extensions. When checking the DLL with Reflector, it shows two additional .NET assembly references. Then I checked the project settings and found nothing about the extra two assembly references. I turned on the compiler's "show includes" option and the linker's verbose library output, and tried to analyze whether the assemblies are indirectly referenced via an .h file. I collected all the .h files and grepped them for '#using', '#import' and the assembly file names themselves. There really is a '#using' in one of the .h files, but it is not relevant to the referenced assemblies. As for the linked .lib files, only one of them is a side product of another managed-extensions-enabled C++ project; all the others are produced by pure, traditional C++ projects. For that managed-extensions-enabled C++ project, I checked the output DLL assembly, and it did NOT reference the two assemblies. I even tried to capture access to the additional assembly files via Sysinternals' Filemon and Procmon, but the rebuild process does NOT access these files. I'm very confused about the compile and link process model of a VC++/CLI project: where did the additional assembly references slip into the final assembly? Thanks in advance for any of your help.
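    One way to see exactly which external assemblies the final COM server references (my suggestion, not from the post; ildasm ships with the .NET Framework SDK, and the executable name below is a placeholder):

        REM Dump the IL/manifest as text and keep only the external assembly references
        ildasm /text MyComServer.exe | findstr /C:".assembly extern"

    Comparing that list between the old binary and the newly rebuilt one should show whether the two extra references really were introduced by the rebuild.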

    Read the article

  • Should one reject over-scoped projects?

    - by Little Child
    I spoke to my first potential client today, and he told me about the requirements of his project: an Android app. He is a well-known designer/photographer in my country and now wants me to "convert the website into an app, custom-tailored". The requirements, details stripped out, are as follows:

    - eCommerce
    - Aggregating all his content, like videos, blogs, tweets, etc., into the app
    - Live streaming any of his studio demos
    - Augmented reality, so that people can see what his painting will look like on their wall before they buy it
    - Taxi sharing

    Now, for a freelance project, it seems over-scoped. I am not saying that I cannot do it. I can. But let me be realistic:

    - There is a steep learning curve when it comes to augmented reality.
    - I am not a tester. I have never white-box tested my own apps. I always black-box test.
    - Since he is a renowned artist, anything short of perfect might harm his public image.

    So I asked him for two weeks before I give him the final answer. Not knowing whom to consult for advice, I am posting the question here. Although interesting and personally challenging, I am torn about accepting a project like this. I will be the only developer for this. Should one reject a project that seems to be over-scoped for one's own abilities?

    Read the article

  • Proper library for enums

    - by Bobson
    I'm trying to refactor some code such that the display is separate from the implementation, and I'm not sure where to put the existing enums. My project is currently structured as follows:

    - Utilities
    - RemoteData (Depends on: Utilities)
    - LocalData (Depends on: RemoteData, Utilities)
    - RemoteWeb (Depends on: RemoteData, Utilities)
    - LocalWeb (Depends on: RemoteData, LocalData, Utilities)

    I'm now trying to add "ViewLibrary (Depends on: Utilities)" to this list, and then adding it as a new dependency to both RemoteWeb and LocalWeb. It will contain a set of interfaces which the other two projects will implement, use to populate the view, and then consume the result. There's an enum which is currently used in all the projects except Utilities. It thus lives in the RemoteData project, because everything else depends on it. But this new ViewLibrary won't depend on either data project. So how will it know about this enum? Some options I see:

    1. Create a new project just for shared enum values.
    2. Add it to Utilities, even though it is related to data.
    3. Define it a second time in ViewLibrary, and require both RemoteWeb and LocalWeb to convert the one type into the other when they access the shared views.
    4. Add a dependency on RemoteData to the ViewLibrary, even though it's supposed to be independent of data-source.

    Are there any better options? Is this structure flawed to begin with?

    Read the article

  • including pre-built java classes into an android project

    - by moonlightcheese
    I'm trying to include a Maven Java project in my Android project. The Maven project is the greader-unofficial project, which gives developers access to Google Reader accounts and handles all of the HTTP transactions and URI/URL building, making grabbing feeds and items from Google Reader transparent to the developer. The project is available here: http://code.google.com/p/greader-unofficial/ The code is originally written for the standard JDK and uses classes from java.net that are not part of the standard Android SDK. I actually tried to manually resolve all dependencies and ran into a problem when I got as far as including the com.sun.syndication pieces required by the class be.lechtitseb.google.reader.api.util.AtomUtil.java... some of the classes in java.net that are in the standard JDK (I'm using 1.6) are not in the Android SDK. In addition, resolving all of these dependencies manually is just ridiculous when I'm compiling a Maven project that should be pretty simple. However, I can use Maven to compile the sources with no issue. How can I include this Maven project, which depends on the complete JDK, in my Android project in such a way that it will compile, so that I can access the GoogleReader class from my Android project? And for the record, I don't have the expertise to rewrite this entire API to work with the standard Android SDK.
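    A sketch of letting Maven do the packaging and dependency gathering before pulling the result into the Android project (paths are placeholders, and this only bundles the jars; it does not make JDK-only classes available on Android):

        # From the greader-unofficial checkout: build the library and collect its runtime dependencies
        mvn package
        mvn dependency:copy-dependencies -DincludeScope=runtime
        # The library jar lands in target/, its dependencies in target/dependency/;
        # copy them into the Android project's libs/ directory (or add them to the build path by hand)
        cp target/*.jar target/dependency/*.jar /path/to/AndroidProject/libs/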

    Read the article

  • Advice on refactoring PHP Project

    - by b0x
    I have a small SaaS ERP that was written some years ago in PHP. At that time it didn't use any framework, but the code isn't a mess. Nowadays the project is growing and I'm working with 3 more programmers. They often ask me why we don't migrate to a framework such as Laravel. Although I'd love to try Laravel, I'm a small business and I have neither the time nor the money to stop and spend a whole year rebuilding everything from scratch; I need to live and pay the bills. So I've read a lot about this matter, and I decided that refactoring is the best way to go. Also, I'm not so sure that a framework would make things easier.

    Business goals are:

    - Make the code easier for newly hired programmers.
    - Separate the "view", in order to: release different versions of this product (using the same code) under different brands and websites at minimal cost (just changing the view); and release different versions to fit mobile/tablet.
    - Make different types of this product, selling packages as if they were plugins.
    - Develop custom packages for some customers (like plugins/add-ons that they can buy to put on the main application).

    Code goals:

    - Introduce best practices and standards for everyone.
    - Try to build my own MVC structure.
    - Improve validation of data/forms (today it is mixed in both AJAX and classes).
    - Create automated testing routines for quality assurance.

    My current project structure:

        class\
        extra\
        hd\
        logs\
        public_html\
        public_html\includes\
        public_html\css|js|images\

    class\ - There are three types of classes. They are all "autoloaded" with something similar to PSR-0, but I don't use namespaces.

    1. class.Something.php - Connects to the database using specific methods, e.g. Costumer->list(). It uses "class.Db.php", which is an abstraction over MySQL in every method.
    2. class.SomethingProc.php - Does things that "join" what comes from "class.Something.php", like IF/ELSE and math operations.
    3. class.SomethingHTML.php - The classes with the "HTML" suffix implement only static methods and contain HTML code only.

    A real-life example (all the programmers need to use $cSomething ($c for class) and $arrSomething (for array)):

        Costumer.php (view)
        <?php
        $cCostumer = new Costumer();
        $arrCostumer = $cCostumer->list();
        echo CostumerHTML::table($arrCostumer);
        ?>

    extra\ - Stores third-party projects/classes from others, such as mPDF, PHPMailer, etc.
    hd\ - Stores users' files outside the wwwroot dir.
    logs\ - Stores PHP logs and the system's own logs (we have a static Log::error() method that we put in every method of every class).
    public_html\ - Stores the files that people use.
    public_html\includes\ - Stores the main "config.php" file and all files that do "AJAX things", e.g. ajax.Costumer.php.

    Help is needed ;) So, as you can see, we have some standards, including for database things, but I want to write a manual of our rules, something I can give to any new programmer at my company so he can get going. This is not a total mess, but it could be better in light of newer practices. What could I do to separate this into MVC so we can have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers without breaking the main application. URLs for tutorials, books, etc. would be nice.

    Read the article
