Search Results

Search found 5562 results on 223 pages for 'poco libraries'.

Page 118/223 | < Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >

  • include files in a method?

    - by fayer
    I want to have a class that includes all files for me, e.g. Loader::loadZend // loads all Zend libraries, Loader::loadSymfony // loads all Symfony components. If I include a file inside a method, does it become globally available? It seems that it doesn't work. Maybe I have done something wrong, or is there a workaround for this? Thanks

    Read the article

  • Does Windows Server 2003 provide network mutexes?

    - by arpal
    Hi! I want to coordinate use of common files on Windows Server 2003 from two Windows XP workstations. Does Windows Server 2003 provide network mutexes for this purpose? Are there any libraries of C functions to access them? I couldn't find such functions in Visual C++ 2008.
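    A note for context: Windows has no true network-wide mutex object, so coordination between machines is usually done through the file system itself. Below is a minimal sketch of one common approach - exclusively opening a well-known lock file on the share - shown in C# although the same idea applies to Win32 CreateFile/LockFileEx from C. The UNC path and retry policy are assumptions.

    using System;
    using System.IO;
    using System.Threading;

    class SharedFileLock
    {
        // Try to take an exclusive lock by opening a well-known lock file on the share.
        // FileShare.None means any second opener gets an IOException until we dispose.
        static FileStream AcquireLock(string lockPath, TimeSpan timeout)
        {
            DateTime deadline = DateTime.UtcNow + timeout;
            while (true)
            {
                try
                {
                    return new FileStream(lockPath, FileMode.OpenOrCreate,
                                          FileAccess.ReadWrite, FileShare.None);
                }
                catch (IOException)
                {
                    if (DateTime.UtcNow > deadline) throw;
                    Thread.Sleep(250); // another workstation holds the lock; retry
                }
            }
        }

        static void Main()
        {
            // Hypothetical UNC path on the Windows Server 2003 share.
            using (AcquireLock(@"\\server\common\files.lock", TimeSpan.FromSeconds(30)))
            {
                // Work with the common files here; the lock is released on dispose.
            }
        }
    }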

    Read the article

  • Moving users folder on Windows-7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn "10 reputation", and I removed my "answer" there to post my follow-up questions here). I am anxiously getting ready, at long last, to carry out a clean install (using the custom install option) from Vista to Windows 7 Home Premium 64-bit with the free upgrade I received late October. For my Vista system I successfully set up last summer a multi-partition scheme with Users and Program Data on a different partition than the operating system (see the link below, and its subsequent links in my comment, for details). http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562 I was planning a similar set-up for Windows 7, a little more streamlined, with the OS and Program Files on C:, Users and Program Data on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second-guessing too. Is moving Users and Program Data to a different partition than the default primary partition with the OS and Program Files such a good idea? The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer about the intention of moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP, less so in Vista, but not anymore with Windows 7... "It is stable, after two months still no problem." I had the feeling it was a scripted answer to emphasize how stable and efficient Windows 7 is... (Will a Windows 7 system not become bogged down over the course of several months to a year or two? Only time will tell.) Long story short, I share the view that Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2TB USB 2.0 / eSATA external hard drive as a back-up drive, which includes NTI Shadow 4 (4.1.0.150) as a back-up solution. I took note of the issue with NTUSER.DAT and I will read more about the Volume Shadow Copy Service (VSS) for Windows 7. I am willing to put in the effort if placing Users and Program Data on a different partition would allow me to restore a fresher OS + Program image when the system gets bogged down. Questions: Is it such a bad idea? What is the "easy route" referred to by Benjol in his post? Is it to just relocate folders to another partition using the folder Properties tool? (That is not practical for several users and might not provide a straightforward restore process of just the OS and Program Files when needed.) I am starting to learn about Windows 7 libraries. Would Windows 7 libraries be another way to achieve this? All this reading to decide how to organize the partition scheme for my custom system is starting to be confusing. I apologize for this lengthy question. It is my first day here on SuperUser and I am just learning how different from a discussion thread it is. Thank you in advance for all your suggestions and comments. Donat

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ UPDATE AT THE BOTTOM. THANKS! ;)

    Environment Info (all Windows):
    2 sites
    30 servers at site #1 (3TB of backup data)
    5 servers at site #2 (1TB of backup data)
    MPLS backbone tunnel connecting site #1 and site #2

    Current Backup Process:
    Online Backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.
    Off-site Backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries: one is always here and one is always there.

    Requirements:
    Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    A software-based solution, as hardware options have been too pricey (e.g., SonicWall, Arkeia).
    Agents for Exchange, SharePoint, and SQL.

    Some Ideas So Far:
    Storage: a DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.
    Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.
    Server: because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems:
    When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.

    And Finally, the Question:
    Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.

    UPDATE: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. I prefer a GUI-based product and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists, or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first

    - by mattalexx
    Background: I have a client site that consists of a CakePHP installation and a Magento installation:

    /web/example.com/
    /web/example.com/app/                    <== CakePHP
    /web/example.com/app/webroot/            <== DocumentRoot
    /web/example.com/app/webroot/store/      <== Magento
    /web/example.com/config/                 <== Site-wide config
    /web/example.com/vendors/                <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem: The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.

    Abandoned solution: At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using arbitrary subdomains that resolve to the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment to the one they were built in as I can manage. Also, I do not want to debug their code to make it portable.

    Half-baked solution: After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:

    /web/example.com/                        <== A bunch of their files are in here
    /web/example.com/app/webroot/            <== 1st DocumentRoot; a bunch of their files are in here
    /web/example.com/app/webroot/store/      <== Some more are in here
    /web/example.com/site/                   <== New dir; contains only site files
    /web/example.com/site/app/               <== CakePHP
    /web/example.com/site/app/webroot/       <== 2nd DocumentRoot
    /web/example.com/site/app/webroot/store/ <== Magento
    /web/example.com/site/config/            <== Site-wide config
    /web/example.com/site/vendors/           <== Site-wide libraries

    After I made this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind: the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.

    Conclusion: Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
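    For what it's worth, a sketch of the kind of mod_rewrite directives that express "serve from the first root if the file exists, otherwise fall back to the second", using the paths from the layout above. This is only a starting point, not a tested config for 2.2.3, and note that $_SERVER['DOCUMENT_ROOT'] would still report the first root, which is exactly the caveat raised above.

    <VirtualHost *:80>
        ServerName www.example.com
        ServerAlias *.example.com
        DocumentRoot /web/example.com/app/webroot

        RewriteEngine On
        # If the request does not resolve to a real file or directory
        # under the company-upload root...
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-f
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-d
        # ...serve it from the clean site tree instead.
        RewriteRule ^/(.*)$ /web/example.com/site/app/webroot/$1 [L]
    </VirtualHost>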

    Read the article

  • Creating a File Upload control

    - by jaullo
    To start, let's talk a little about the FileUpload control, to give a general idea of what it is and how it works. FileUpload is an ASP.NET control that lets users select a file from any location on their machine and upload it to a predetermined directory through an ASP.NET page. By default the control does not allow uploads larger than 4 MB; however, from our application's web.config we can change that value, either to increase it or to decrease it. Our example will focus on creating a web control that lets the user select a file and save it, so let's get started.

    The first step is to add to our page a web control that we will call Upload.ascx. Then, in our web control, we add the following markup:

    <table style="width: 100%">
        <tr>
            <td colspan="3">
                <div align="center">
                    <asp:Label ID="Label1" runat="server" Text="File Upload"></asp:Label>
                </div>
            </td>
        </tr>
        <tr>
            <td style="width: 456px" rowspan="2">&nbsp;</td>
            <td style="width: 386px">
                <div align="center">
                    <asp:FileUpload ID="FileUpload1" runat="server" Height="24px" Width="243px" />
                    <span id="Span1" runat="server" />
                </div>
            </td>
            <td rowspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td style="width: 386px">
                <div align="center">
                    <asp:ImageButton ID="btnupload" runat="server" OnClick="btnupload_Click"
                        ImageUrl="~/Styles/img/upload.png" style="text-align: center" />
                </div>
            </td>
        </tr>
        <tr>
            <td colspan="3">&nbsp;</td>
        </tr>
    </table>

    Our control should then look something like this (screenshot omitted here).

    Finally, in the code-behind of our control we add the code for our button, which is in charge of reading the file held by the FileUpload and saving it to the specified path:

    Protected Sub btnupload_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles btnupload.Click
        If FileUpload1.HasFile Then
            Dim fileExt As String
            fileExt = System.IO.Path.GetExtension(FileUpload1.FileName)
            If (fileExt = ".exe") Then
                Label1.Text = "You can't upload an .exe file!"
            Else
                Try
                    FileUpload1.SaveAs(decrpath & FileUpload1.FileName)
                    Label1.Text = "File name: " & _
                        FileUpload1.PostedFile.FileName & "<br>" & _
                        "File size: " & _
                        FileUpload1.PostedFile.ContentLength & " bytes<br>" & _
                        "Content type: " & _
                        FileUpload1.PostedFile.ContentType
                Catch ex As Exception
                    Label1.Text = "ERROR: " & ex.Message.ToString()
                End Try
            End If
        Else
            Label1.Text = "You have not specified a file!"
        End If
    End Sub

    As you can see in the code above, we have also added a few extra elements that report the file name, the content type, and the size in bytes once the file has been uploaded to the server. Finally, keep in mind that decrpath is the path the file will be uploaded to, which you should change to suit your needs.
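    As a side note on the 4 MB limit mentioned above, this is roughly the web.config setting that controls it. maxRequestLength is expressed in KB; the 20480 value below is only an example for about 20 MB.

    <configuration>
      <system.web>
        <!-- Default is 4096 KB (4 MB); raise or lower to taste. -->
        <httpRuntime maxRequestLength="20480" />
      </system.web>
    </configuration>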

    Read the article

  • The search for efficiency as the Holy Grail of healthcare ICT

    - by Eloy M. Rodríguez
    The XVIII Jornadas de Informática Sanitaria en Andalucía (18th Healthcare Informatics Conference in Andalusia) closed last Friday with 11,500 hours of collective intelligence. Although I suppose the figure comes from multiplying the hours of sessions and workshops by the number of registrants, which would not be entirely accurate since I estimate average attendance at around ninety people, I suppose it does reflect the total if we include the volume of informal interactions that the format and venue encourage. My subjective summary is that we are all aware that we must achieve more efficiency in, and thanks to, ICT, and that to do so we have pointed out some guidelines that the attendees, in our different roles, should apply and help spread. Along those lines, I think what stands out is the need to be very clear about where you start from and what you want to achieve, for which it is essential to measure, and for the measurements to feed back into the system so it can reach its objectives. In that regard, as an anecdote, I would like to mention a paradox about efficiency that was presented: given that the cost per day of a hospital stay is higher at the beginning than in the last days of the stay, if we become more efficient and reduce the average stay, late-stay days will be freed up and used for new admissions, so the larger number of first-stay days will increase the total cost. In that case we would improve the service to citizens but increase the cost, unless action were taken to resize hospital capacity, lowering the cost without compromising quality. Another highlighted topic was the possibility, and the need, to take advantage of ICT capabilities to make structural changes and move medicine from reactive to proactive, using alerts that make it possible to act before a serious problem occurs. Another topic addressed was the real need to make citizens genuinely co-responsible, thanks to the enormous low-cost possibilities that ICT offers, embracing a move toward collaborative health that has many challenges ahead but also many more opportunities. The citizen's health folder, emerging in several projects and ideas, is a step in that direction. One topic that stirred passions was when the Managing Director of Sergas complained that ICT projects were extremely slow. Unfortunately her schedule did not allow her to stay for the debate, which was fairly intense and brought up issues such as the very long administrative process, changing specifications, custom designs, etc., as factors beyond the specific efficiency of the ICT professionals involved in the projects. Finally, I want to mention a very interesting topic in line with what was discussed at the conference about the need to measure: the SEIS Index. The idea is to define a series of criteria, grouped into broad lines and with a fine-grained breakdown, that monitor the contribution of ICT to improving health and healthcare. We were shown some preliminary versions, with the debate still open between two broad approaches: working from the major objectives down to the processes, or from the processes up to the objectives. The discussion is not merely academic, since it influences the parameters to be established. The good news is that the work is fairly advanced and health services will soon have a comparison tool based on the national reality.
    For those interested, several attendees tweeted the conference, so anyone who wants a few more details can go to Twitter and search for the hashtag #jisa18; reading from oldest to newest gives a running account, with subjective points of view, of what happened there. I cannot avoid a couple of self-criticisms, since I am a member of SEIS. The first concerns the SEIS portal, which has not had the interactivity that a conference like this needed. It will soon start to carry documents and analysis of what happened there, and later the more polished reports and analysis will come in the I+S magazine. But in the second decade of the 21st century quite a bit more is needed. The other concerns the undesirably small presence of healthcare ICT users, in their roles as healthcare professionals and as citizens who use healthcare information systems. We have to be proactive so that they attend in significant numbers; otherwise we risk becoming healthcare-ICT absolutists: everything for the users, but without the users.

    Read the article

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and communicates directly with these services via SOAP and returns DataTables. Now I am designing a new, modern web application: an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated, it's going to take some time to rewrite and replace it. Now to the problem: I'm trying to make it easy in the near future to replace the WCF services with some other data storage, e.g. another endpoint, database or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer communicates directly with the data storage (WCF service, database, or whatever) and the service layer then uses the repository (dependency injection) to get the data. It doesn't care where it gets the data from. Later on I can be in control and structure the data returned from the data storage (DataTable to POCO) and be able to test the logic in the service layer with some mock repository (using dependency injection). Below is some code to explain where I'm going with this. But my question is, does this all make sense? Am I making this overly complicated, and could this be simplified in any way? Does it become too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on, e.g. an ORM, and to be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated.

    Update 20/08/14: I created a repository factory, so services all share repositories. Now it's easy to mock a repository, add it to the factory and create a provider using that factory. Any advice is much appreciated. I want to know if I'm making things more complicated than they should be. So it looks like this:

    1. Repository Factory

    public class RepositoryFactory
    {
        private Dictionary<Type, IServiceRepository> repositories;

        public RepositoryFactory()
        {
            this.repositories = new Dictionary<Type, IServiceRepository>();
        }

        public void AddRepository<T>(IServiceRepository repo) where T : class
        {
            if (this.repositories.ContainsKey(typeof(T)))
            {
                this.repositories.Remove(typeof(T));
            }
            this.repositories.Add(typeof(T), repo);
        }

        public dynamic GetRepository<T>()
        {
            if (this.repositories.ContainsKey(typeof(T)))
            {
                return this.repositories[typeof(T)];
            }
            throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
        }
    }

    I'm not very fond of dynamic but I don't know how to retrieve that repository otherwise.

    2. Repository and service

    // Service repository interface
    // All repository interfaces extend this
    public interface IServiceRepository { }

    // Invoice repository interface
    // Makes it easy to mock the repository later on
    public interface IInvoiceServiceRepository : IServiceRepository
    {
        List<Invoice> GetInvoices();
    }

    // Invoice repository
    // Connects to some data storage to retrieve invoices
    public class InvoiceServiceRepository : IInvoiceServiceRepository
    {
        public List<Invoice> GetInvoices()
        {
            // Get the invoices from somewhere
            // This could be a WCF, a database, or whatever
            using (InvoiceServiceClient proxy = new InvoiceServiceClient())
            {
                return proxy.GetInvoices();
            }
        }
    }

    // Invoice service
    // Service that handles talking to a real or a mock repository
    public class InvoiceService
    {
        // Repository factory
        RepositoryFactory repoFactory;

        // Default constructor
        // By default connects to the real repository
        public InvoiceService(RepositoryFactory repo)
        {
            repoFactory = repo;
        }

        // Service function that gets all invoices from some repository (mock or real)
        public List<Invoice> GetInvoices()
        {
            // Query the repository
            return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices();
        }
    }
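    To make the intent of the factory concrete, here is a rough usage sketch from a unit test, with a hand-rolled mock standing in for the WCF-backed repository. Everything beyond the types shown in the post (the mock class, the test name, Invoice having a parameterless constructor) is an assumption.

    using System.Collections.Generic;

    // Hand-rolled mock that satisfies IInvoiceServiceRepository without touching WCF.
    public class MockInvoiceServiceRepository : IInvoiceServiceRepository
    {
        public List<Invoice> GetInvoices()
        {
            // Assumes Invoice has a parameterless constructor.
            return new List<Invoice> { new Invoice(), new Invoice() };
        }
    }

    public class InvoiceServiceTests
    {
        public void GetInvoices_ReturnsInvoicesFromRepository()
        {
            // Arrange: register the mock under the repository interface type.
            var factory = new RepositoryFactory();
            factory.AddRepository<IInvoiceServiceRepository>(new MockInvoiceServiceRepository());
            var service = new InvoiceService(factory);

            // Act
            List<Invoice> invoices = service.GetInvoices();

            // Assert (plain Debug.Assert to stay test-framework agnostic)
            System.Diagnostics.Debug.Assert(invoices.Count == 2);
        }
    }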

    Read the article

  • Using ViewModel in ASP.NET MVC with FluentValidation

    - by Brian McCord
    I am using ASP.NET MVC with Entity Framework POCO classes and the FluentValidation framework. It is working well, and the validation is happening as it should (as if I were using DataAnnotations). I have even gotten client-side validation working. And I'm pretty pleased with it. Since this is a test application I am writing just to see if I can get new technologies working together (and learn them along the way), I am now ready to experiment with using ViewModels instead of just passing the actual Model to the view. I'm planning on using something like AutoMapper in my service to do the mapping back and forth from Model to ViewModel but I have a question first. How is this going to affect my validation? Should my validation classes (written using FluentValidation) be written against the ViewModel instead of the Model? Or does it need to happen in both places? One of the big deals about DataAnnotations (and FluentValidation) was that you could have validation in one place that would work "everywhere". And it fulfills that promise (mostly), but if I start using ViewModels, don't I lose that ability and have to go back to putting validation in two places? Or am I just thinking about it wrong?
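    For what it's worth, the usual resolution is to write the FluentValidation rules against the ViewModel, since that is what gets posted and what client-side validation sees, and keep any invariant that must hold regardless of UI in the Model's validator or in the domain itself. A minimal sketch, assuming a hypothetical UserViewModel:

    using FluentValidation;

    // Hypothetical ViewModel shape; the real one would mirror the form fields.
    public class UserViewModel
    {
        public string Name { get; set; }
        public string Email { get; set; }
    }

    // Validator written against the ViewModel, so model binding, server-side
    // validation and client-side rules all use the same source.
    public class UserViewModelValidator : AbstractValidator<UserViewModel>
    {
        public UserViewModelValidator()
        {
            RuleFor(x => x.Name).NotEmpty().Length(1, 100);
            RuleFor(x => x.Email).NotEmpty().EmailAddress();
        }
    }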

    Read the article

  • ASP.Net Entity Framework, objectcontext error

    - by Chris Klepeis
    I'm building a 4-layered ASP.NET web application. The layers are: Data Layer, Entity Layer, Business Layer, UI Layer. The entity layer has my data model classes and is built from my entity data model (edmx file) in the data layer using T4 templates (POCO). The entity layer is referenced in all other layers. My data layer has a class called SourceKeyRepository which has a function like so:

    public IEnumerable<SourceKey> Get(SourceKey sk)
    {
        using (dmc = new DataModelContainer())
        {
            var query = from SourceKey in dmc.SourceKeys
                        select SourceKey;

            if (sk.sourceKey1 != null)
            {
                query = from SourceKey in query
                        where SourceKey.sourceKey1 == sk.sourceKey1
                        select SourceKey;
            }

            return query;
        }
    }

    Lazy loading is disabled since I do not want my queries to run in other layers of this application. I'm receiving the following error when attempting to access the information in the UI layer: "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection." I'm sure this is because my DataModelContainer "dmc" was disposed. How can I return this IEnumerable object from my data layer so that it does not rely on the ObjectContext, but solely on the DataModel? Is there a way to limit lazy loading to only occur in the data layer?
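    The usual fix for this error is to materialize the query before the context is disposed; returning the raw query defers execution until after the using block has already disposed the ObjectContext. A sketch of the same Get method with that one change (assumes the usual using System.Linq; the context is made a local variable here just for the sketch):

    public IEnumerable<SourceKey> Get(SourceKey sk)
    {
        using (var dmc = new DataModelContainer())
        {
            IQueryable<SourceKey> query = dmc.SourceKeys;

            if (sk.sourceKey1 != null)
            {
                query = query.Where(s => s.sourceKey1 == sk.sourceKey1);
            }

            // ToList() runs the query now, while the context is still alive, so the
            // UI layer receives plain objects rather than a live, context-bound query.
            return query.ToList();
        }
    }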

    Read the article

  • EF4 querying from parent to grandchildren

    - by Hans Kesting
    I have a model with Parents, Children and Grandchildren, in a many-to-many relationship. Using this article I created POCO classes that work fine, except for one thing I can't yet figure out. When I query the Parents or Children directly using LINQ, the SQL reflects the LINQ query (a .Count() executes a COUNT in the database and so on) - fine. The Parent class has a Children property, to access its children. But (and now for the problem) this doesn't expose an IQueryable interface but an ICollection. So when I access the Children property on a particular parent, all the Parent's Children are read. Even worse, when I access the Grandchildren (theParent.Children.SelectMany(child => child.GrandChildren).Count()), then for each and every child a separate request is issued to select all data of its grandchildren. That's a lot of separate queries! Changing the type of the Children property from ICollection to IQueryable doesn't solve this. Apart from missing methods I need, like Add() and Remove(), EF just doesn't recognize the navigation property then. Are there correct ways (as in: low database interaction) of querying through children (and what are they)? Or is this just not possible?
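    One way to keep this set-based is to query through the context instead of through an already-loaded parent's collection, so EF can translate the whole traversal into a single SQL statement. A sketch, under the assumption that the context exposes a Parents set and that Parent has an Id key (both names are assumptions):

    // One round trip: the SelectMany chain is translated to SQL,
    // instead of lazily loading each child's GrandChildren collection.
    int grandchildCount = context.Parents
        .Where(p => p.Id == parentId)
        .SelectMany(p => p.Children)
        .SelectMany(c => c.GrandChildren)
        .Count();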

    Read the article

  • mvvm - prismv2 - INotifyPropertyChanged

    - by depictureboy
    Since this is so long and rambling and really doesn't ask a coherent question: 1: What is the proper way to implement subproperties of a primary object in a viewmodel? 2: Has anyone found a way to fix the DelegateCommand.RaiseCanExecuteChanged issue, or do I need to fix it myself until MS does? For the rest of the story... continue on. In my viewmodel I have a Doctor object property that is tied to my Model.Doctor, which is an EF POCO object. I have OnPropertyChanged("Doctor") in the setter, as such:

    Private Property Doctor() As Model.Doctor
        Get
            Return _objDoctor
        End Get
        Set(ByVal Value As Model.Doctor)
            _objDoctor = Value
            OnPropertyChanged("Doctor")
        End Set
    End Property

    The only time OnPropertyChanged fires is if the WHOLE object changes. This wouldn't be a problem except that I need to know when the properties of Doctor change, so that I can enable other controls on my form (a save button, for example). I have tried to implement it in this way:

    Public Property FirstName() As String
        Get
            Return _objDoctor.FirstName
        End Get
        Set(ByVal Value As String)
            _objDoctor.FirstName = Value
            OnPropertyChanged("Doctor")
        End Set
    End Property

    This is taken from the XAMLPowerToys controls from Karl Shifflet, so I have to assume that it's correct. But for the life of me I can't get it to work. I have included PRISM here because I am using a Unity container to instantiate my view and it IS a singleton. I am getting change notification to the viewmodel via the EventAggregator, which then populates Doctor with the new value. The reason I am doing all this is because of PRISM's DelegateCommand. So maybe that is my real issue. It appears that there is a bug in DelegateCommand that does not fire the RaiseCanExecuteChanged method on the commands that implement it, so it needs to be fired manually. I have the code for that in my OnPropertyChangedEventHandler. Of course this isn't implemented through the ICommand interface either, so I have to break away and make my properties DelegateCommand(Of X) so that I have access to RaiseCanExecuteChanged on each command.

    Read the article

  • Unit Testing - Validation of ViewModel ASP.NET MVC 2

    - by dean nolan
    I am currently unit testing a service that adds users to a repository. I am using dependency injection to test using a fake repository. The repository has a method CreateUser(User user) which just adds it to the database, or in this case a List of Users. The logic for the creation is in the UserServices class. The application has a form for creating a user that requires some properties such as name and address. This is an MVC 2 app and I will be using the new validation with data annotations. This makes me wonder about a few things: 1) Should I annotate a POCO object that will map to the database? Or should I create a specific view model that has these annotations and pass this data to the UserServices class? 2) Should the UserServices class also check this data? Would I best be constructing a User out of the view model and passing this into the service as a parameter? 3) The actual unit testing would depend on 2): I either populate a User object and pass that in, or I pass a large list of strings to the method CreateUser. Writing this out, I get the basic idea that I should probably annotate the view model only, pass in a User (constructed from the view model if the data is valid) and also just construct the User in the unit test. Is this the best way to go?
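    A sketch of the split described in the last sentence, with the annotations living on a view model and the service accepting an already-built User. IUserRepository and the property names are assumptions here, not part of the post:

    using System;
    using System.ComponentModel.DataAnnotations;

    // View model carries the UI/input validation rules.
    public class CreateUserViewModel
    {
        [Required, StringLength(100)]
        public string Name { get; set; }

        [Required, StringLength(200)]
        public string Address { get; set; }
    }

    // Assumed name for the repository abstraction described in the post.
    public interface IUserRepository
    {
        void CreateUser(User user);
    }

    // The service works purely in domain terms and never sees the view model.
    public class UserServices
    {
        private readonly IUserRepository repository;

        public UserServices(IUserRepository repository)
        {
            this.repository = repository;
        }

        public void CreateUser(User user)
        {
            // A light guard protects callers that bypass MVC model binding.
            if (string.IsNullOrEmpty(user.Name))
                throw new ArgumentException("Name is required", "user");

            repository.CreateUser(user);
        }
    }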

    Read the article

  • LINQ to XML via C#

    - by user70192
    Hello, I'm new to LINQ. I understand its purpose, but I can't quite figure it out. I have an XML set that looks like the following:

    <Results>
      <Result>
        <ID>1</ID>
        <Name>John Smith</Name>
        <EmailAddress>[email protected]</EmailAddress>
      </Result>
      <Result>
        <ID>2</ID>
        <Name>Bill Young</Name>
        <EmailAddress>[email protected]</EmailAddress>
      </Result>
    </Results>

    I have loaded this XML into an XDocument as such:

    string xmlText = GetXML();
    XDocument xml = XDocument.Parse(xmlText);

    Now, I'm trying to get the results into POCO format. In an effort to do this, I'm currently using:

    var objects = from results in xml.Descendants("Results")
                  select new Results // I'm stuck

    How do I get a collection of Result elements via LINQ? I'm particularly confused about navigating the XML structure at this point in my code. Thank you!
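    For what it's worth, a sketch of one way to finish that query, assuming a simple Result POCO with ID, Name and EmailAddress properties: enumerate the repeating Result elements (not the Results root) and project each one into an object.

    using System.Linq;
    using System.Xml.Linq;

    public class Result
    {
        public int ID { get; set; }
        public string Name { get; set; }
        public string EmailAddress { get; set; }
    }

    // ...continuing from the xml variable above:
    var results = (from r in xml.Descendants("Result")
                   select new Result
                   {
                       ID = (int)r.Element("ID"),
                       Name = (string)r.Element("Name"),
                       EmailAddress = (string)r.Element("EmailAddress")
                   }).ToList();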

    Read the article

  • Debugging strategy to find the cause of bad_alloc

    - by SalamiArmi
    I have a fairly serious bug in my program - occasional calls to new() throw a bad_alloc. From the documentation I can find on bad_alloc, it seems to be thrown for these reasons: When the computer runs out of memory (which definitely isn't happening, I have 4GB of RAM, program throws bad_alloc when using less than 5MB (checked in taskmanager) with nothing serious running in the background). If the memory becomes too fragmented to allocate new blocks (which, again, is unlikely - the largest sized block I ever allocate would be about 1KB, and that doesn't get done more than 100 times before the crash occurs). Based on these descriptions, I don't really have anywhere in which a bad_alloc could be thrown. However, the application I am running runs more than one thread, which could possibly be contributing to the problem. By testing all of the objects on a single thread, everything seems to be working smoothly. The only other thing that I can think of that is going on here could be some kind of race-condition caused by calling new() in more than one place at the same time, but I've tried adding mutexes to prevent that behaviour to no effect. Because the program is several hundred lines and I have no idea where the problem actually lies, I'm not sure of what, if any, code snippets to post. Instead, I was wondering if there were any tools that will help me test for this kind of thing, or if there are any general strategies that can help me with this problem. I'm using Microsoft Visual Studio 2008, with Poco for threading.

    Read the article

  • change custom mapping - sharp architecture/ fluent nhibernate

    - by csetzkorn
    I am using Sharp Architecture, which also uses FNH. The DB schema SQL is generated during testing like this:

    [TestFixture]
    [Category("DB Tests")]
    public class MappingIntegrationTests
    {
        [SetUp]
        public virtual void SetUp()
        {
            string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
            configuration = NHibernateSession.Init(
                new SimpleSessionStorage(),
                mappingAssemblies,
                new AutoPersistenceModelGenerator().Generate(),
                "../../../../app/XXX.Web/NHibernate.config");
        }

        [TearDown]
        public virtual void TearDown()
        {
            NHibernateSession.CloseAllSessions();
            NHibernateSession.Reset();
        }

        [Test]
        public void CanConfirmDatabaseMatchesMappings()
        {
            var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata();
            foreach (var entry in allClassMetadata)
            {
                NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco))
                    .SetMaxResults(0).List();
            }
        }

        /// <summary>
        /// Generates and outputs the database schema SQL to the console
        /// </summary>
        [Test]
        public void CanGenerateDatabaseSchema()
        {
            System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql");
            var session = NHibernateSession.GetDefaultSessionFactory().OpenSession();
            new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile);
        }

        private Configuration configuration;
    }

    I am trying to use:

    using FluentNHibernate.Automapping;
    using xxx.Core;
    using SharpArch.Data.NHibernate.FluentNHibernate;
    using FluentNHibernate.Automapping.Alterations;

    namespace xxx.Data.NHibernateMaps
    {
        public class x : IAutoMappingOverride<x>
        {
            public void Override(AutoMapping<x> mapping)
            {
                mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)");
                mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)");
            }
        }
    }

    to change the standard mapping of strings from NVARCHAR(255) to varchar(max). This is not picked up during the SQL schema generation. I also tried:

    mapping.Map(x => x.text, "text").Length(100000);

    Any ideas? Thanks. Christian
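    If the goal is really "every string column becomes varchar(max)", one alternative to per-entity overrides is a FluentNHibernate convention, which the schema export will pick up as long as it is registered on the AutoPersistenceModel that AutoPersistenceModelGenerator.Generate() builds. A sketch only; whether and where Sharp Architecture lets you register conventions depends on how that generator is implemented in this solution.

    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    // Applies to every automapped string property unless something more specific overrides it.
    public class LongStringConvention : IPropertyConvention
    {
        public void Apply(IPropertyInstance instance)
        {
            if (instance.Property.PropertyType == typeof(string))
            {
                instance.CustomSqlType("varchar(max)");
            }
        }
    }

    // Registration, somewhere inside AutoPersistenceModelGenerator.Generate():
    // model.Conventions.Add<LongStringConvention>();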

    Read the article

  • Synchronizing a collection of wrapped objects with a collection of unwrapped objects

    - by Kenneth Cochran
    I have two classes: Employee and EmployeeGridViewAdapter. Employee is composed of several complex types. EmployeeGridViewAdapter wraps a single Employee and exposes its members as a flattened set of system types so a DataGridView can handle displaying, editing, etc. I'm using VS's built-in support for turning a POCO into a data source, which I then attach to a BindingSource object. When I attach the DataGridView to the BindingSource it creates the expected columns, and at runtime I can perform the expected CRUD operations. All is good so far. The problem is that the collection of adapters and the collection of employees aren't being synchronized, so none of the employees I create at runtime ever get persisted. Here's a snippet of the code that generates the collection of EmployeeGridViewAdapters:

    var employeeCollection = new List<EmployeeGridViewAdapter>();
    foreach (var employee in this.employees)
    {
        employeeCollection.Add(new EmployeeGridViewAdapter(employee));
    }
    this.view.Employees = employeeCollection;

    Pretty straightforward, but I can't figure out how to synchronize changes back to the original collection. I imagine edits are already handled because both collections reference the same objects, but creating new employees and deleting employees aren't happening, so I can't be sure.
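    One way to close the loop, sketched under two assumptions: the adapter exposes its wrapped instance as an Employee property, and the view can bind to a BindingList so adds and deletes raise events. Edits already flow through because both collections share the same Employee instances; the rebuild at save time picks up the rows the grid added or removed.

    using System.ComponentModel;
    using System.Linq;

    // Bind through a BindingList so the grid can add and remove rows.
    var employeeCollection = new BindingList<EmployeeGridViewAdapter>(
        this.employees.Select(e => new EmployeeGridViewAdapter(e)).ToList());

    // Let the grid create new rows: wrap a brand-new Employee in a new adapter.
    employeeCollection.AddingNew += (s, e) =>
        e.NewObject = new EmployeeGridViewAdapter(new Employee());

    this.view.Employees = employeeCollection;

    // At save time, rebuild the persisted collection from whatever adapters survived editing.
    // Assumes EmployeeGridViewAdapter exposes its wrapped instance as .Employee.
    this.employees.Clear();
    foreach (var adapter in employeeCollection)
    {
        this.employees.Add(adapter.Employee);
    }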

    Read the article

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design was set up with a unit of work that looked like this:

    public interface IUnitOfWork
    {
        void BeginTransaction();
        void Commit();
    }

    and our repositories looked like this:

    public interface IRepository<T>
    {
        T GetByID();
        void Save(T entity);
        void Delete(T entity);
    }

    In this setup our load and save would handle the mapping between both data stores, because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations. Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method, because the domain objects are being tracked by the object context; we would just need to add a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:

    public interface IUnitOfWork
    {
        void BeginTransaction();
        void Save();
        void Commit();
    }

    public interface IRepository<T>
    {
        T GetByID();
        void Add(T entity);
        void Delete(T entity);
    }

    This solves the data access problem with Entity Framework, but it does not solve the problem with our Active Directory integration. Before, that logic was in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I'd argue this design only works if you only have one data store using Entity Framework. Any ideas how to best approach this issue? Where should I put this logic?
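    One option is to give the unit of work the coordinating role: repositories queue their Active Directory work against it, and Save() flushes the EF context first and then applies the queued directory operations, so neither the repositories nor the service layer has to know there are two stores. A rough sketch; IActiveDirectoryService stands in for the existing infrastructure domain service, and EnlistDirectoryChange is an addition to the design, not part of the posted interface.

    using System;
    using System.Collections.Generic;
    using System.Data.Objects;

    public class UnitOfWork : IUnitOfWork
    {
        private readonly ObjectContext context;
        private readonly IActiveDirectoryService activeDirectory;
        private readonly List<Action<IActiveDirectoryService>> pendingDirectoryWork =
            new List<Action<IActiveDirectoryService>>();

        public UnitOfWork(ObjectContext context, IActiveDirectoryService activeDirectory)
        {
            this.context = context;
            this.activeDirectory = activeDirectory;
        }

        // Repositories queue AD changes here instead of performing them in Add()/Save().
        public void EnlistDirectoryChange(Action<IActiveDirectoryService> work)
        {
            pendingDirectoryWork.Add(work);
        }

        public void BeginTransaction() { /* open a TransactionScope or connection here */ }

        public void Save()
        {
            context.SaveChanges();                    // flush tracked POCO changes to SQL
            foreach (var work in pendingDirectoryWork)
                work(activeDirectory);                // then apply the queued AD operations
            pendingDirectoryWork.Clear();
        }

        public void Commit() { /* complete the transaction scope */ }
    }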

    Read the article

  • ViewModel updates after Model server roundtrip

    - by Pavel Savara
    I have stateless services and anemic domain objects on the server side. The model between server and client is a POCO DTO. The client should become MVVM. The model could be a graph of about 100 instances of 20 different classes. The client editor contains diverse tab pages, all of them live-connected to the model/viewmodel. My problem is how to propagate changes after a server round-trip in a nice way. It's quite easy to propagate changes from the ViewModel to the DTO. For the way back it would be possible to throw away the old DTO and replace it wholesale with the new one, but that would cause a lot of redrawing for lists/DataTemplates. I could gather the server-side changes and transmit them to the client side, but the names of the fields changed would be domain/DTO-specific, not ViewModel-specific, and the mapping seems nontrivial to me. If I did it the imperative way after the round-trip, it would break the SoC/modularity of the viewmodels. I'm thinking about some kind of mapping rule engine, something like AutoMapper or EmitMapper, but those solve only very plain use cases. I don't see how they would map/propagate/convert adding items to a list, or removals. How would they identify instances in collections so values could be merged into existing instances? It should also propagate validation/error info. Maybe I should implement INotifyPropertyChanged on the DTO and try to replay server-side events on it? And then bind the ViewModel to it? Would binding solve the problems with collection merges in a nice way? Is the EventAggregator from PRISM useful for that? Is there any event record-replay component? Is there a better client-side pattern for an architecture with server-side logic?
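    On the narrow question of identifying instances in collections: if every DTO and ViewModel item carries a stable Id, a small generic merge step can update existing items in place, add the new ones and remove the missing ones, which avoids throwing away the whole collection and re-templating the lists. A sketch of that idea; the Id selectors and update/create callbacks are whatever your DTO/ViewModel pairs provide, and nothing here is tied to a particular mapping library.

    using System;
    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Linq;

    public static class CollectionMerge
    {
        // TViewModel wraps TDto; both expose a stable Id so instances can be matched.
        public static void MergeInto<TDto, TViewModel>(
            IEnumerable<TDto> fresh,
            ObservableCollection<TViewModel> target,
            Func<TDto, int> dtoId,
            Func<TViewModel, int> vmId,
            Action<TViewModel, TDto> updateExisting,
            Func<TDto, TViewModel> createNew)
        {
            var freshById = fresh.ToDictionary(dtoId);

            // Remove view models whose DTO disappeared on the server.
            foreach (var stale in target.Where(vm => !freshById.ContainsKey(vmId(vm))).ToList())
                target.Remove(stale);

            foreach (var dto in freshById.Values)
            {
                var existing = target.FirstOrDefault(vm => vmId(vm) == dtoId(dto));
                if (existing != null)
                    updateExisting(existing, dto);   // in-place update, no re-templating
                else
                    target.Add(createNew(dto));      // genuinely new item
            }
        }
    }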

    Read the article

  • How to invalidate the OutputCache in a webfarm?

    - by Pure.Krome
    Hi folks, I've got a website that uses the OutputCache attribute to cache pages. Works great. Now I'm in the middle of R&D'ing scaling this site up to a web farm. Along with the usual suspects for web farm pain... I've noticed (pretty quickly/obviously) that the OutputCache on Server_A doesn't invalidate the OutputCache on Server_B if I try to invalidate a single server's OutputCache. This makes total sense - how can S_A 'tell' S_B to invalidate when they are physically two separate machines, etc.? So - what are our options? Velocity? I understand this would move the caching to a different layer... which means that the final result (output) will always need to be determined, as opposed to the OutputCache, which remembers the final output content (yes, VaryBy gives different versions, etc., which is totally fine). So even though the POCO or business objects are all synced, there's still that last rendering effort required (even if it's tiny compared to the effort to generate/sync business objects). So yeah... not sure of the options here - what do other people do?
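    If the goal is to keep the OutputCache attribute but make invalidation farm-wide, one low-tech option is for every node to expose a small internal endpoint that evicts the entry locally, and for the invalidating code to call that endpoint on each node. A sketch only; the node list, route and port are assumptions, and HttpResponse.RemoveOutputCacheItem clears the cache only on the server that runs it, which is exactly why it has to be called on every machine.

    using System;
    using System.Net;
    using System.Web;
    using System.Web.Mvc;

    public class CacheAdminController : Controller
    {
        // Called on each server: evicts the cached output for one URL on THIS node only.
        [HttpPost]
        public ActionResult Invalidate(string path)
        {
            HttpResponse.RemoveOutputCacheItem(path); // e.g. "/products/123"
            return new EmptyResult();
        }
    }

    public static class FarmCacheInvalidator
    {
        // Hypothetical internal addresses of the farm's nodes.
        private static readonly string[] Nodes = { "http://server-a:8080", "http://server-b:8080" };

        public static void Invalidate(string path)
        {
            foreach (var node in Nodes)
            {
                using (var client = new WebClient())
                {
                    client.UploadString(
                        node + "/cacheadmin/invalidate?path=" + Uri.EscapeDataString(path),
                        string.Empty);
                }
            }
        }
    }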

    Read the article

  • Help me understand entity framework 4 caching for lazy loading

    - by Chris
    I am getting some unexpected behaviour with Entity Framework 4.0 and I am hoping someone can help me understand it. I am using the Northwind database for the purposes of this question. I am also using the default code generator (not POCO or self-tracking). I am expecting that any time I query the context, the framework will only make a round trip if I have not already fetched those objects. I do get this behaviour if I turn off lazy loading. Currently in my application I briefly turn on lazy loading and then turn it back off so I can get the desired behaviour. That pretty much sucks, so please help. Here is a good code example that demonstrates my problem:

    Public Sub ManyRoundTrips()
        context.ContextOptions.LazyLoadingEnabled = True
        Dim employees As List(Of Employee) = context.Employees.Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()
        'makes an unnecessary round trip to the database, I just loaded the employees'
        MessageBox.Show(context.Employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)
        context.Orders.Execute(System.Data.Objects.MergeOption.AppendOnly)
        For Each emp As Employee In employees
            'makes an unnecessary trip to the database every time despite orders being pre-loaded'
            Dim i As Integer = emp.Orders.Count
        Next
    End Sub

    Public Sub OneRoundTrip()
        context.ContextOptions.LazyLoadingEnabled = True
        Dim employees As List(Of Employee) = context.Employees.Include("Orders").Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()
        MessageBox.Show(employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)
        For Each emp As Employee In employees
            Dim i As Integer = emp.Orders.Count
        Next
    End Sub

    Why is the first block of code making unnecessary round trips?

    Read the article

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello experts, Application: I am working on a mid-to-large size application which will be used as a product; we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure. So patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order: speed of development with above-average performance; maintenance; future support and stability of the technology; performance. Limitations: 1) As we need to strictly stay with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0. 2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any licensing issue on a per-copy basis. Questions: I read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or whatever you feel is best? I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck with EF 4.0 on some feature or hit a serious performance issue. In the reverse case, if we go ahead and choose ADO.NET, how long would it take to switch to EF 4.0? Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirements, as switching from one technology to the other is really easy. Please share your thoughts on the same and please give guidance on the above questions.

    Read the article

< Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >