Search Results

Search found 57957 results on 2319 pages for 'business application'.


  • Where to enlist a transaction with a parent/child delete (repository or BLL)?

    - by Caroline Showden
    My app uses a business layer which calls a repository which uses LINQ to SQL. I have an Item class that has an enum type property and an ItemDetail property. I need to implement a delete method that: (1) always deletes the Item, and (2) if the item's type is XYZ and the ItemDetail is not null, deletes the ItemDetail as well. My question is where this logic should be housed. If I put it in my business logic, which I would prefer, it involves two separate repository calls, each of which uses a separate DataContext. I would have to wrap both calls in a System.Transactions transaction, which (in SQL 2005) gets promoted to a distributed transaction, which is not ideal. I can move it all into a single repository call so the transaction is handled implicitly by the DataContext, but I feel that this is really business logic and so does not belong in the repository. Thoughts? Carrie
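
    A minimal C# sketch of the single-repository-call option described above (the data context, table, and property names are illustrative, not taken from the original code):

        // Hypothetical LINQ to SQL repository method: both deletes go through one
        // DataContext, so SubmitChanges runs in a single local transaction and is
        // never promoted to a distributed transaction.
        public void DeleteItem(int itemId)
        {
            using (var db = new ItemsDataContext())
            {
                var item = db.Items.Single(i => i.Id == itemId);

                // Business rule: XYZ items also lose their detail row.
                if (item.Type == ItemType.XYZ && item.ItemDetail != null)
                {
                    db.ItemDetails.DeleteOnSubmit(item.ItemDetail);
                }

                db.Items.DeleteOnSubmit(item);
                db.SubmitChanges();
            }
        }

    One compromise sometimes used is to keep the "when does the detail go too" decision in a small business-layer policy that the repository consults, so the rule stays out of the data access code while the deletes still share one DataContext.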

    Read the article

  • Instantiating a context in LINQ to Entities

    - by Jagd
    I've seen two different approaches programmers take when creating an entity context in their code. The first is like this, and you can find it all over the MSDN code examples: public void DoSomething() { using (TaxableEducationEntities context = new TaxableEducationEntities()) { // business logic and whatever else } } The second is to create the context as a private field in some class that encapsulates your business logic, so you would have something like: public class Education_LINQ { private TaxableEducationEntities context = new TaxableEducationEntities(); public void DoSomething() { var result = from a in context.luAction select a; // business logic and whatever else } } Which way is more efficient? Assume that you have two methods, one called DoSomething1() and another called DoSomething2(), and both methods use the using statement to open the context and work with it. If you were to call one method after the other, would there be any superfluous overhead, since essentially both methods create the context and then clean it up when they're done? As opposed to having just one private field that is created when a class object is instantiated, and then in turn cleaned up when the object goes out of scope?
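
    For reference, a sketch of the per-method pattern with the missing parenthesis fixed (TaxableEducationEntities and luAction are taken from the question; the comments describe general EF/ADO.NET behaviour, not anything from the original post):

        public void DoSomething1()
        {
            using (var context = new TaxableEducationEntities())
            {
                var actions = context.luAction.ToList();
                // work with the results here
            }   // context and its connection are released deterministically
        }

        public void DoSomething2()
        {
            using (var context = new TaxableEducationEntities())
            {
                // A second short-lived context is cheap to construct: the mapping
                // metadata is cached per AppDomain and the underlying database
                // connections come from the ADO.NET connection pool.
            }
        }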

    Read the article

  • Which layer should create DataContext?

    - by Kevin
    I have a problem deciding which layer in my system should create the DataContext. I have read a book saying that if you do not pass the same DataContext object for all the database updates, you will sometimes get an exception thrown from the DataContext. That's why I initially create a new instance of the DataContext in the business layer and pass it into the data access layer, so that the same DataContext is used for all the updates. But this leads to one design problem: if I want to change my DAL to something other than LINQ to SQL in the future, I need to rewrite the code in the business layer as well. Please give me some advice on this. Thanks. Example code:

        'Business Layer
        Public Sub SaveData(name As String)
            Using ts As New TransactionScope()
                Using db As New MyDataContext()
                    DAL.Insert(db, name)
                    DAL.Insert(db, name)
                End Using
                ts.Complete()
            End Using
        End Sub

        'Data Access Layer
        Public Sub Insert(db As MyDataContext, name As String)
            db.TableAInsert(name)
        End Sub
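
    A C# sketch of one common way to keep the business layer unaware of LINQ to SQL: the business code talks only to interfaces, and the DataContext lives behind the unit-of-work implementation (all type names are illustrative, not from the original question):

        public interface IUnitOfWork : IDisposable
        {
            void Commit();   // the LINQ to SQL implementation would call DataContext.SubmitChanges()
        }

        public interface ITableARepository
        {
            void Insert(string name);
        }

        public class BusinessService
        {
            private readonly Func<IUnitOfWork> _createUnitOfWork;
            private readonly Func<IUnitOfWork, ITableARepository> _createRepository;

            public BusinessService(Func<IUnitOfWork> createUnitOfWork,
                                   Func<IUnitOfWork, ITableARepository> createRepository)
            {
                _createUnitOfWork = createUnitOfWork;
                _createRepository = createRepository;
            }

            public void SaveData(string name)
            {
                using (var unitOfWork = _createUnitOfWork())
                {
                    var repository = _createRepository(unitOfWork);
                    repository.Insert(name);
                    repository.Insert(name);
                    unitOfWork.Commit();   // both inserts share one underlying DataContext
                }
            }
        }

    Swapping the DAL then only means supplying different IUnitOfWork/ITableARepository implementations; the business-layer code above does not change.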

    Read the article

  • DDD: Client-side script to enforce invariants

    - by Mosh
    Hello, one thing that I'm confused about with regard to DDD is that our domain is supposed to handle all business logic and enforce invariants. I have noticed some people (me included) handle certain invariants in the presentation layer (e.g. WebForms, Views, etc.) with JavaScript. This is mainly done to improve performance, so the server is not hit for every request which may be invalid. Even though this approach may be beneficial performance-wise, it violates DDD principles. What if the business rules are changed? This way we don't have a rich domain where all the business rules are captured; in case of a change, we would have to change the domain as well as the presentation layer. Has anyone come across this situation before? I'd like to know your thoughts on this. Cheers, Mosh
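
    For illustration, a minimal sketch of an invariant kept in the domain model; any client-side JavaScript check would only duplicate this rule for responsiveness, never replace it (the Order class and the 30% limit are invented for the example):

        public class Order
        {
            public decimal Total { get; private set; }

            public void ApplyDiscount(decimal percent)
            {
                // The invariant lives here, so it holds regardless of what the
                // presentation layer (or its JavaScript) lets through.
                if (percent < 0m || percent > 30m)
                    throw new ArgumentOutOfRangeException("percent",
                        "Discounts must be between 0% and 30%.");

                Total -= Total * (percent / 100m);
            }
        }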

    Read the article

  • PHPTAL Nested Repeat

    - by 114343500565423229598
    Hello guys, I am having a problem trying to achieve a nested repeat in PHPTAL:

        <tr tal:repeat="business analysis_result">
          <td>${business/trading_name}</td>
          <tal:block tal:repeat="selected_key selected_keys">
            <td>HOW??????</td> <!-- problem -->
          </tal:block>
        </tr>

    Basically I want the <td> of the inner repeat to get the value of $business[$selected_key]. I have looked at the PHPTAL manual, which doesn't really give you a demo of how to do this... cheers guys! James

    Read the article

  • Design - Where should objects be registered when using Windsor

    - by Fredrik Jansson
    I will have the following components in my application: DataAccess, DataAccess.Test, Business, Business.Test, and Application. I was hoping to use Castle Windsor as the IoC container to glue the layers together, but I am a bit uncertain about the design of the gluing. My question is: who should be responsible for registering the objects into Windsor? I have a couple of ideas: (1) Each layer can register its own objects; to test the BL, the test bench could register mock classes for the DAL. (2) Each layer can register the objects of its dependencies, e.g. the business layer registers the components of the data access layer; to test the BL, the test bench would have to unload the "real" DAL objects and register the mock objects. (3) The application (or test app) registers all objects of all the dependencies. Can someone help me with some ideas and the pros/cons of the different paths? Links to example projects using Castle Windsor in this way would be very helpful.
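
    A sketch of option (1) combined with (3) using Castle Windsor installers: each layer ships an IWindsorInstaller, and only the application (or a test fixture) decides which ones to install (the component names are illustrative):

        using Castle.MicroKernel.Registration;
        using Castle.MicroKernel.SubSystems.Configuration;
        using Castle.Windsor;

        // Lives in the DataAccess assembly.
        public class DataAccessInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                container.Register(Component.For<ICustomerRepository>()
                                            .ImplementedBy<SqlCustomerRepository>());
            }
        }

        // Lives in the Business assembly.
        public class BusinessInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                container.Register(Component.For<ICustomerService>()
                                            .ImplementedBy<CustomerService>());
            }
        }

        // Composition root in the Application project:
        //   var container = new WindsorContainer();
        //   container.Install(new BusinessInstaller(), new DataAccessInstaller());
        //
        // Business.Test would instead install BusinessInstaller plus an installer
        // that registers mock ICustomerRepository implementations.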

    Read the article

  • Google Local Search API

    - by Gublooo
    Hey guys, a couple of quick questions. 1) In the local search results we can get a lot of parameters such as street title, address, city, state, lat, long, URL, etc. In order to uniquely identify a record, can I consider the URL to be unique to the address, or should I use the concatenation of latitude and longitude? Ref: http://code.google.com/apis/ajaxsearch/documentation/reference.html#_class_GlocalResult 2) In terms of usage, depending on what the user enters, I'm displaying a list of local businesses for the user to choose from. When a user selects a particular business address, is it legal for me to store that business address along with the latitude and longitude information in my database for future look-ups? I've seen a lot of blogs talking about storing the lat/long info but just want to be sure that I'm not violating any Google rules. Thanks

    Read the article

  • Entity Framework: Data Centric vs. Object Centric

    - by Eric J.
    I'm having a look at Entity Framework, and everything I'm reading takes a data-centric approach to explaining EF. By that I mean that the fundamental relationships of the system are first defined in the database, and objects are generated that reflect those relationships. Examples: "Quickstart (Entity Framework)" and "Using Entity Framework entities as business objects?". The EF documentation implies that it's not necessary to start from the database layer, e.g. "Developers can work with a consistent application object model that can be mapped to various storage schemas." When designing a new system (simplified version), I tend to first create a class model, then generate business objects from the model, code the business layer stuff that can't be generated, and then worry about persistence (or rather work with a DBA and let him worry about the most efficient persistence strategy). That object-centric approach is well supported by ORM technologies such as (n)Hibernate. Is there a reasonable path to an object-centric approach with EF? Will I be swimming upstream going that route? Any good starting points?

    Read the article

  • Entity Framework: [Set all the entities with the internal access specifier]

    - by Vedaantees
    Hi, by the nature of my application, I need to separate my business entities from the entities created by EF4. I need to restrict the EF entities so they are only accessed from the repository, where they are translated (using a translator) into the business entities shared at the business and service layers. I thought of restricting them by marking them as internal. Now, there are more than 40 entities in my application, so manually setting them as internal is a difficult job. In one of the forums the answers suggested using T4 templates, but even those templates read the access specifier from the Entity Framework model. When I manually try to set all the property and class access specifiers to internal, it gives me an error saying that the entity set should also be set to internal, but there is no option for that. I am using VS 2010 and Entity Framework 4. Any suggestions?

    Read the article

  • Dealing with dependencies between WCF services when using Castle Windsor

    - by Georgia Brown
    I have several WCF services which use Castle Windsor to resolve their dependencies. Now I need some of these services to talk to each other. The typical structure is service -- business logic -- DAL, and the calls to the other services need to occur at the business logic level. What is the best approach for implementing this? Should I simply inject a service proxy into the business logic? Is this wasteful if, for example, only one or two methods from my service need to use this proxy? And what if the services need to talk to each other? Will Castle Windsor get stuck in a loop trying to resolve each service's dependencies?
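
    A sketch of the proxy-injection option: the business logic depends only on the other service's contract interface, and the container supplies a channel for it (the contract, endpoint name, and registration details are illustrative assumptions, not from the original question):

        public class OrderLogic
        {
            private readonly IInventoryService _inventory;   // WCF contract of the other service

            public OrderLogic(IInventoryService inventory)
            {
                _inventory = inventory;
            }

            public void PlaceOrder(int productId)
            {
                _inventory.Reserve(productId);   // the remote call only happens when this path runs
            }
        }

        // One possible Windsor registration, wrapping a plain ChannelFactory:
        //   container.Register(Component.For<IInventoryService>()
        //       .UsingFactoryMethod(() =>
        //           new ChannelFactory<IInventoryService>("InventoryEndpoint").CreateChannel()));

    Because the container only resolves channel proxies rather than the other service's own object graph, this arrangement by itself does not send Windsor into a resolution loop even when two services call each other.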

    Read the article

  • Drupal 6: Taxonomy split up in managed fields?

    - by Ardi Coetzee
    So you have two taxonomies, namely "Business Type" and "Location", and both are assigned to a node type called BUSINESS. In effect, when the user creates a BUSINESS node, he has to choose, for example, location "New York" and type "Information Services". My problem is that when a) capturing the taxonomy, and b) displaying the taxonomy, I want the two terms to be separated from each other. I.e. I want to be able to move the two terms to individual positions in the MANAGE FIELDS view, so that they can be grouped or placed separately. Currently, Drupal only allows one entry, called "TAXONOMY", which is effectively the two terms next to each other. (Screenshots of the current and desired layouts were attached to the original question.) Bear in mind, I need to be able to use this with Hierarchical Select, which means Content Taxonomy is not an option.

    Read the article

  • Maintaining Project with Git

    - by gkrdvl
    Hi all, I have two projects, and these two projects are actually about 80% the same as each other; the main difference is language and business model. One is for a larger audience, using the English language with a $9/month business model, and the other uses the local language with a freemium business model. Sometimes when I want to add a new feature or piece of functionality, I want to add it to both projects, but sometimes I want to add a feature just for the local project. My question is: how do I maintain these two projects with git? Maintain a separate git repository for each project, maintain a single git repository with two main branches, or any other suggestion?

    Read the article

  • Embedding mercurial revision information in Visual Studio c# projects automatically

    - by Mark Booth
    Original Problem

    In building our projects, I want the Mercurial ID of each repository to be embedded within the product(s) of that repository (the library, application or test application). I find it makes it so much easier to debug an application being run by customers 8 timezones away if you know precisely what went into building the particular version of the application they are using. As such, every project (application or library) in our systems implements a way of getting at the associated revision information. I also find it very useful to be able to see if an application has been compiled with clean (unmodified) changesets from the repository. 'hg id' usefully appends a + to the changeset id when there are uncommitted changes in a repository, so this allows us to easily see if people are running a clean or a modified version of the code. My current solution is detailed below, and fulfils the basic requirements, but there are a number of problems with it.

    Current Solution

    At the moment, to each and every Visual Studio solution, I add the following "Pre-build event command line" commands:

        cd $(ProjectDir)
        HgID

    I also add an HgID.bat file to the project directory:

        @echo off
        type HgId.pre > HgId.cs
        For /F "delims=" %%a in ('hg id') Do <nul >>HgID.cs set /p = @"%%a"
        echo ; >> HgId.cs
        echo } >> HgId.cs
        echo } >> HgId.cs

    along with an HgId.pre file, which is defined as:

        namespace My.Namespace {
            /// <summary> Auto generated Mercurial ID class. </summary>
            internal class HgID {
                /// <summary> Mercurial version ID [+ is modified] [Named branch]</summary>
                public const string Version =

    When I build my application, the pre-build event is triggered on all libraries, creating a new HgId.cs file (which is not kept under revision control) and causing the library to be re-compiled with the new 'hg id' string in 'Version'.

    Problems with the current solution

    The main problem is that since HgId.cs is re-created at each pre-build, every time we need to compile anything, all projects in the current solution are re-compiled. Since we want to be able to easily debug into our libraries, we usually keep many libraries referenced in our main application solution. This can result in build times which are significantly longer than I would like. Ideally I would like the libraries to compile only if the contents of the HgId.cs file have actually changed, as opposed to having been re-created with exactly the same contents. The second problem with this method is its dependence on specific behaviour of the Windows shell. I've already had to modify the batch file several times, since the original worked under XP but not Vista, the next version worked under Vista but not XP, and finally I managed to make it work with both. Whether it will work with Windows 7, however, is anyone's guess, and as time goes on I see it more likely that contractors will expect to be able to build our apps on their Windows 7 boxen. Finally, I have an aesthetic problem with this solution: batch files and bodged-together template files feel like the wrong way to do this.

    My actual questions

    How would you solve, or how are you solving, the problem I'm trying to solve? What better options are out there than what I'm currently doing?

    Rejected solutions to these problems

    Before I implemented the current solution, I looked at Mercurial's Keyword extension, since it seemed like the obvious solution. However, the more I looked at it and read people's opinions, the more I came to the conclusion that it wasn't the right thing to do.
    I also remember the problems that keyword substitution has caused me in projects at previous companies (just the thought of ever having to use Source Safe again fills me with a feeling of dread *8'). Also, I don't particularly want to have to enable Mercurial extensions to get the build to complete. I want the solution to be self-contained, so that it isn't easy for the application to be accidentally compiled without the embedded version information just because an extension isn't enabled or the right helper software hasn't been installed. I also thought of writing this in a better scripting language, one where I would only write the HgId.cs file if the content had actually changed, but all of the options I could think of would require my co-workers, contractors and possibly customers to install software they might not otherwise want (for example Cygwin). Any other options people can think of would be appreciated.

    Update: Partial solution

    Having played around with it for a while, I've managed to get the HgId.bat file to only overwrite the HgId.cs file if it changes:

        @echo off
        type HgId.pre > HgId.cst
        For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cst set /p = @"%%a"
        echo ; >> HgId.cst
        echo } >> HgId.cst
        echo } >> HgId.cst
        fc HgId.cs HgId.cst >NUL
        if %errorlevel%==0 goto :ok
        copy HgId.cst HgId.cs
        :ok
        del HgId.cst

    Problems with this solution

    Even though HgId.cs is no longer being re-created every time, Visual Studio still insists on compiling everything every time. I've tried looking for solutions and tried checking "Only build startup projects and dependencies on Run" in Tools|Options|Projects and Solutions|Build and Run, but it makes no difference. The second problem also remains, and now I have no way to test whether it will work with Vista, since that contractor is no longer with us. If anyone can test this batch file on a Windows 7 and/or Vista box, I would appreciate hearing how it went. Finally, my aesthetic problem with this solution is even stronger than it was before, since the batch file is more complex and there is now more to go wrong. If you can think of any better solutions, I would love to hear about them.

    Read the article

  • Where to open sessions in a Spring/Hibernate stack?

    - by CaptainAwesomePants
    I'm trying to figure out a good design for a Spring/Hibernate app. When creating such an app, it appears like there are a handful of major decisions. The first major decision seems to be where to put the session/transaction boundary. It seems like I have 3 major choices: as a filter before controllers are even invoked, immediately below the controllers at the service call level, and stuffed way below the business level in repository calls. It seems to me like the right call is the middle path, but I'm not sure. I don't want my transactions open too long, but at the same time, I don't want to constantly worry about detached objects and lazy loading in the business logic. Still, there are some downsides. For instance, it makes it hard for the business logic to make a remote call without holding up a transaction for a few seconds. I wonder if there's a better way?

    Read the article

  • Creative way of implementing a data object with its corresponding business logic class in Java

    - by ekeren
    I have a class that needs to be serialized (for both persistence and client-server communication). For simplicity's sake, let's call the classes Business and BusinessData, with an I prefix for their interfaces. All the getters and setters are delegated from the Business class to the BusinessData class. I thought about implementing an IBusinessData interface that contains all the getters and setters and an IBusiness interface that extends it. I can either make Business extend BusinessData, so I will not need to implement all the getter and setter delegates, or make some abstract class ForwardingBusinessData that only delegates getters and setters. With either of the above options I lose my freedom in the class hierarchy. Does anyone have a creative solution for this problem? I also reviewed the DAO pattern: http://java.sun.com/blueprints/patterns/DAO.html

    Read the article

  • Yet another C# Deadlock Debugging Question

    - by Roo
    Hi all, I have a multi-threaded application built in C# using VS2010 Professional. It's quite a large application and we've experienced the classic GUI cross-threading and deadlock issues before, but in the past month we've noticed it appears to lock up when left idle for around 20-30 minutes. The application is unresponsive, and although it will repaint itself when other windows are dragged in front of it and over it, the GUI still appears to be locked... interestingly (unlike when the GUI thread is being used for a considerable amount of time) the Close, Maximise and Minimise buttons are also unresponsive, and when clicked the little (Not Responding...) text is not displayed in the title of the application, i.e. Windows still seems to think it's running fine. If I break/pause the application using the debugger and view the threads that are running, there are 3 threads of our managed code running, and a few other worker threads for which the source code cannot be displayed. The 3 threads that run are: (1) the main/GUI thread, (2) a thread that loops indefinitely, and (3) another thread that loops indefinitely. If I step into threads 2 and 3, they appear to be looping correctly. They do not share locks (even with the main GUI thread) and they are not using the GUI thread at all. When stepping into the main/GUI thread, however, it's broken on Application.Run... This problem screams deadlock to me, but what I don't understand is: if it's deadlock, why can't I see the line of code the main/GUI thread is hanging on? Any help will be greatly appreciated! Let me know if you need more information... Cheers, Roo

    ----------------------------------------------------- SOLUTION --------------------------------------------------

    Okay, so the problem is now solved. Thanks to everyone for their suggestions! Much appreciated! I've marked the answer that solved my initial problem of determining where on the main/UI thread the application hangs (I hadn't turned off the "Enable Just My Code" option). The overall issue I was experiencing was indeed deadlock, however. After obtaining the call stack and popping the top half of it into Google, I came across this, which explains exactly what I was experiencing... http://timl.net/ This references a lovely guide to debugging the issue... http://www.aaronlerch.com/blog/2008/12/15/debugging-ui/ This identified a control I was constructing off the GUI thread. I did know this, however, and was marshalling calls correctly, but what I didn't realise was that behind the scenes this control was subscribing to an event or set of events that are triggered when e.g. a Windows session is unlocked or the screensaver exits. These calls are always made on the main/UI thread, and they were blocking when marshalled to the control that had been created on the incorrect thread. Kim explains this in more detail here... http://krgreenlee.blogspot.com/2007/09/onuserpreferencechanged-hang.html In the end I found an alternative solution which did not require creating this control off the main/UI thread. That appears to have solved the problem and the application no longer hangs. I hope this helps anyone who's confronted by a similar problem. Thanks again to everyone on here who helped! (and indirectly, the delightful bloggers I've referenced above!) Roo

    ----------------------------------------------------- SOLUTION II --------------------------------------------------

    Aren't threading issues delightful... you think you've solved it, and a month down the line it pops back up again.
    I still believe the solution above resolved an issue that would cause similar behaviour, but we encountered the problem again. As we spent a while debugging this, I thought I'd update this question with our (hopefully) final solution: the problem appears to have been a bug in the Infragistics components in the WinForms 2010.1 release (no hot fixes). We had been running that release from around the time the freeze issue appeared (but had also added a bunch of other stuff too). After upgrading to WinForms 2010.3, we've yet to reproduce the issue (deja vu). See my question here for a bit more information: 'http://stackoverflow.com/questions/4077822/net-4-0-and-the-dreaded-onuserpreferencechanged-hang'. Hans has given a nice summary of the general issue there. I hope this adds a little to the suggestions/information surrounding the notorious OnUserPreferenceChanged hang (or whatever you'd like to call it). Cheers, Roo
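
    For illustration, a sketch of the kind of fix described in the first solution above: making sure a control is constructed on the UI thread rather than on a worker thread (StatusControl and mainForm are invented names):

        void CreateStatusControl(Form mainForm)
        {
            if (mainForm.InvokeRequired)
            {
                // We're on a worker thread: hand the construction to the UI thread.
                mainForm.Invoke(new Action(() => CreateStatusControl(mainForm)));
                return;
            }

            // Now running on the UI thread, so any SystemEvents subscription the
            // control makes behind the scenes is tied to a thread that pumps messages.
            var status = new StatusControl();
            mainForm.Controls.Add(status);
        }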

    Read the article

  • Passing Results from SQL to Google Maps API in CodeIgniter

    - by Jason Shultz
    I'm hoping to use Google Maps on my site. My addresses are stored in a DB, and I'm pulling up a page where the information is all dynamic. For example: mysite.com/site/business/5 (where 5 is the id of the business). Let's say I do a query like this:

        function addressForMap($id)
        {
            $this->db->select('b.id, b.busaddress, b.buscity, b.buszip');
            $this->db->from('business as b');
            $this->db->where('b.id', $id);
        }

    How can I output the info to the Google Maps API correctly so that it displays the map appropriately? The API interface takes the results like this: $marker['address'] = 'Crescent Park, Palo Alto';

    Read the article

  • Of Models / Entities and N-tier applications

    - by Jonn
    Only a month ago did I discover the folly of directly accessing entities/models from the data access layer of an n-tier app. After reading about ViewModels while studying ASP.NET MVC, I've come to understand that to make a truly extensible application, the model that the UI layer interacts with must be different from the one that the data access layer has access to. But what about the business layer? Should I have a different set of models for my business layer as well? For true separation of concerns, should I have a specific set of models that are relevant only to my business layer, so as not to mess around with any entities in the DAL (possibly generated by, for example, the Entity Framework or EJB), or would that be overkill?
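
    A minimal sketch of what that separation might look like: a business-layer model that mirrors the DAL entity, with mapping at the boundary (all names are illustrative):

        // Persistence model, e.g. generated by Entity Framework in the DAL.
        public class CustomerEntity
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Business-layer model: free to diverge from the storage shape over time.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerMapper
        {
            public static Customer ToBusinessModel(CustomerEntity entity)
            {
                return new Customer { Id = entity.Id, Name = entity.Name };
            }
        }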

    Read the article

  • How do I protect Dynamic data pages using ASP.NET Authentication?

    - by ProfK
    I have a site where most of my pages are arranged in business area folders, e.g. Activations, Outdoors, Branding. Each folder has a small web.config that protects the contents against access by people without a role for that business area. However, basic admin for most business areas is done via Dynamic Data pages. These are only basically protected by not appearing in the menu unless the user has the correct role, but they are still accessible directly via URL, because of the {table}/{Action} routing used by Dynamic Data. What can I do to protect these pages against direct access?
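
    One possible sketch (an assumption, not something from the original question): a role check in the shared Dynamic Data page templates, keyed on the requested table, so the {table}/{action} routes are no longer open to direct URLs (the table-to-role mapping is invented for the example):

        protected void Page_Init(object sender, EventArgs e)
        {
            MetaTable table = DynamicDataRouteHandler.GetRequestMetaTable(Context);

            var requiredRoles = new Dictionary<string, string>
            {
                { "Activations", "ActivationsRole" },
                { "Branding",    "BrandingRole" }
            };

            string role;
            if (table != null && requiredRoles.TryGetValue(table.Name, out role)
                && !User.IsInRole(role))
            {
                Response.Redirect("~/AccessDenied.aspx");
            }
        }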

    Read the article

  • Moving from WCF RIA RC to Release: best practices?

    - by Duncan Bayne
    I have an existing WCF RIA project built on the Release Candidate; I'm now moving to the Release version and have discovered many changes. David Scruggs made the following comment on his (MSDN) blog: "If you've written anything in Silverlight 4 RIA Services, you'll need to rewrite it. There has been a lot of refactoring and namespace moves." Having made a brief attempt to compile the old solution with the new RIA framework, I'm inclined to agree. My current plan is to: (1) remove the Silverlight Business Application projects from the solution, (2) rebuild the EF4 items from the database, (3) create a new Silverlight Business Application project, and (4) re-add the files (XAML, CS) from the old Silverlight Business Application project. Does this sound like a reasonable approach? I think it's cleaner than trying to manually alter the existing project.

    Read the article

  • How can I intercept Drupal User Registration after it has passed all validations?

    - by Senthil
    Hi, I am using Drupal 6.16. In the user registration module, I want to hook in AFTER all validations have been made and the row is about to be inserted. Here, I want to run my business logic. If my business logic fails, the Drupal registration should be stopped; I can do this by setting an error in the form. If it succeeds, the Drupal registration SHOULD proceed and complete. I decided to use the validate operation in hook_user, but it is possible for the Drupal registration to be stopped at the validation phase itself by some other module that runs after mine. What I want is: when my business logic succeeds, the Drupal registration MUST succeed. Which hook and operation should I use so that I can intercept just before the Drupal user info insert/update and after all validations have succeeded?

    Read the article

  • Architecture Guidance for designing Workflow Foundation with WCF

    - by Matrix
    We are planning to use WF 3.5 with WCF 3.5 and Entity Framework 1.0 for the upcoming major project. I'm looking for guidance on the architecture side. This new application will be based on a typical 3-tier architecture as depicted below:

    Presentation tier: ASP.NET Web Forms 3.5
    Business tier: WF 3.5 + BLLs that expose the business logic through WCF service interfaces (using EF for data access)
    Data tier: SQL Server 2000

    Here are the questions: (1) Though Workflow Foundation has Workflow Services, where we can map WCF service contracts to a workflow, is this the right way to design the application? (2) Can EF 1.0 business entities be used in n-tier apps without sacrificing change tracking on the entities? (3) Is there a sample reference application available to look at? Thanks.

    Read the article

  • How should my team decide between 3-tier and 2-tier architectures?

    - by j0rd4n
    My team is discussing the future direction we will take our projects. Half the team believes in a pure 3-tier architecture, while the other half favors a 2-tier architecture.

    Project assumptions: enterprise business applications; business logic needed between user and database; data validation necessary; service-oriented (prefer RESTful services); multi-year maintenance plan; support for hundreds of users.

    The 3-tier team favors: persistence layer <== domain layer <== UI layer; a service boundary between at least the persistence layer and the domain layer (the domain layer might have a service boundary within it as well); translations between each layer (clean DTO separation); hand-rolled persistence unless we can find creative yet elegant automation.

    The 2-tier team favors: Entity Framework + WCF Data Services layer <== UI layer; business logic kept in WCF Data Services interceptors; minimal translation between layers, favoring faster coding.

    So that's the high-level argument. What considerations should we take into account? What experiences have you had with either approach?

    Read the article

  • Is it okay to pass injected EntityManagers to an EJB bean's helper classes and use them?

    - by Zwei steinen
    We have a Java EE 5 stateless EJB bean that passes the injected EntityManager to its helpers. Is this safe? It has worked well until now, but I found an Oracle document that states its implementation of EntityManager is thread-safe. Now I wonder whether the reason we did not have issues until now was only because the implementation we were using happened to be thread-safe (we use Oracle).

        @Stateless
        class SomeBean {
            @PersistenceContext
            private EntityManager em;
            private SomeHelper helper;

            @PostConstruct
            public void init(){
                helper = new SomeHelper(em);
            }

            @Override
            public void business(){
                helper.doSomethingWithEm();
            }
        }

    Actually it makes sense: if EntityManager is not thread-safe, a container would have to intercept business() roughly like this:

        this.em = newEntityManager();
        business();

    which will not propagate to its helper classes. If so, what is the best practice in this kind of situation? Passing an EntityManagerFactory instead of an EntityManager?

    Read the article

  • Creating a file upload template in Doctrine ORM

    - by balupton
    Hey all. I'm using Doctrine 1.2 as my ORM for a Zend Framework project. I have defined the following model for a File:

        File:
          columns:
            id:
              primary: true
              type: integer(4)
              unsigned: true
            code:
              type: string(255)
              unique: true
              notblank: true
            path:
              type: string(255)
              notblank: true
            size:
              type: integer(4)
            type:
              type: enum
              values: [file,document,image,video,audio,web,application,archive]
              default: unknown
              notnull: true
            mimetype:
              type: string(20)
              notnull: true
            width:
              type: integer(2)
              unsigned: true
            height:
              type: integer(2)
              unsigned: true

    Now here is the File model PHP class (just skim through it for now):

        <?php
        /**
         * File
         *
         * This class has been auto-generated by the Doctrine ORM Framework
         *
         * @package    ##PACKAGE##
         * @subpackage ##SUBPACKAGE##
         * @author     ##NAME## <##EMAIL##>
         * @version    SVN: $Id: Builder.php 6365 2009-09-15 18:22:38Z jwage $
         */
        class File extends BaseFile
        {
            public function setUp ( )
            {
                $this->hasMutator('file', 'setFile');
                parent::setUp();
            }

            public function setFile ( $file )
            {
                global $Application;
                // Configuration
                $config = array();
                $config['bal'] = $Application->getOption('bal');
                // Check the file
                if ( !empty($file['error']) ) {
                    $error = $file['error'];
                    switch ( $file['error'] ) {
                        case UPLOAD_ERR_INI_SIZE :
                            $error = 'ini_size';
                            break;
                        case UPLOAD_ERR_FORM_SIZE :
                            $error = 'form_size';
                            break;
                        case UPLOAD_ERR_PARTIAL :
                            $error = 'partial';
                            break;
                        case UPLOAD_ERR_NO_FILE :
                            $error = 'no_file';
                            break;
                        case UPLOAD_ERR_NO_TMP_DIR :
                            $error = 'no_tmp_dir';
                            break;
                        case UPLOAD_ERR_CANT_WRITE :
                            $error = 'cant_write';
                            break;
                        default :
                            $error = 'unknown';
                            break;
                    }
                    throw new Doctrine_Exception('error-application-file-' . $error);
                    return false;
                }
                if ( empty($file['tmp_name']) || !is_uploaded_file($file['tmp_name']) ) {
                    throw new Doctrine_Exception('error-application-file-invalid');
                    return false;
                }
                // Prepare config
                $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;
                // Prepare file
                $filename = $file['name'];
                $file_old_path = $file['tmp_name'];
                $file_new_path = $file_upload_path . $filename;
                $exist_attempt = 0;
                while ( file_exists($file_new_path) ) {
                    // File already exists
                    // Pump exist attempts
                    ++$exist_attempt;
                    // Add the attempt to the end of the file
                    $file_new_path = $file_upload_path . get_filename($filename,false) . $exist_attempt . get_extension($filename);
                }
                // Move file
                $success = move_uploaded_file($file_old_path, $file_new_path);
                if ( !$success ) {
                    throw new Doctrine_Exception('Unable to upload the file.');
                    return false;
                }
                // Secure
                $file_path = realpath($file_new_path);
                $file_size = filesize($file_path);
                $file_mimetype = get_mime_type($file_path);
                $file_type = get_filetype($file_path);
                // Apply
                $this->path = $file_path;
                $this->size = $file_size;
                $this->mimetype = $file_mimetype;
                $this->type = $file_type;
                // Apply: Image
                if ( $file_type === 'image' ) {
                    $image_dimensions = image_dimensions($file_path);
                    if ( !empty($image_dimensions) ) {
                        // It is not a image we can modify
                        $this->width = 0;
                        $this->height = 0;
                    } else {
                        $this->width = $image_dimensions['width'];
                        $this->height = $image_dimensions['height'];
                    }
                }
                // Done
                return true;
            }

            /**
             * Download the File
             * @return
             */
            public function download ( )
            {
                global $Application;
                // File path
                $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;
                $file_path = $file_upload_path . $this->file_path;
                // Output result and download
                become_file_download($file_path, null, null);
                die();
            }

            public function postDelete ( $Event )
            {
                global $Application;
                // Prepare
                $Invoker = $Event->getInvoker();
                // Configuration
                $config = array();
                $config['bal'] = $Application->getOption('bal');
                // File path
                $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;
                $file_path = $file_upload_path . $this->file_path;
                // Delete the file
                unlink($file_path);
                // Done
                return true;
            }
        }

    What I am hoping to accomplish is for the custom functionality above, inside my model file, to be turned into a validator, template, or something along those lines. So hopefully I could do something like:

        File:
          actAs:
            BalFile:
          columns:
            id:
              primary: true
              type: integer(4)
              unsigned: true
            code:
              type: string(255)
              unique: true
              notblank: true
            path:
              type: string(255)
              notblank: true
            size:
              type: integer(4)
            type:
              type: enum
              values: [file,document,image,video,audio,web,application,archive]
              default: unknown
              notnull: true
            mimetype:
              type: string(20)
              notnull: true
            width:
              type: integer(2)
              unsigned: true
            height:
              type: integer(2)
              unsigned: true

    I'm hoping for a validator so that, say, if I do $File->setFile($_FILE['uploaded_file']); it will produce a validation error. Except in all the Doctrine documentation there is little on custom validators, especially in the context of "virtual" fields. So in summary, my question is: how on earth can I go about making a template/extension to port this functionality? I have tried before with templates but always gave up after a day :/ If you could take the time to port the above, I would greatly appreciate it.

    Read the article
