Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.


  • Is using private shared objects/variables at class level harmful?

    - by haansi
    Hello, thanks for your attention and time. I need your opinion on a basic architectural issue, please. In my page code-behind classes I use private, shared objects and variables (a list, a single client, or simply an int id) to temporarily hold data coming from the database or a class library. The object holds the data briefly and is then returned, passed to some function, or bound to a control. First: can this approach cause harm in any way? I could not fully analyse it, but my concern is that such shared variables might have their data overwritten when multiple users send requests at the same time. Second: please also comment on using such variables in the BLL (to hold data coming from the DAL/database). In this example a new object of the BLL class is created every time. Here is sample code:

        public class ClientManager
        {
            Client objclient = new Client();              // used by GetaClient
            List<Client> clientlist = new List<Client>(); // used by GetClients and SearchClients
            ClientRepository objclientRep = new ClientRepository();

            public List<Client> GetClients()
            {
                return clientlist = objclientRep.GetClients();
            }

            public List<Client> SearchClients(string Keyword)
            {
                return clientlist = objclientRep.SearchClients(Keyword);
            }

            public Client GetaClient(int ClientId)
            {
                return objclient = objclientRep.GetaClient(ClientId);
            }

            public Client GetClientDetailForConfirmOrder(int UserId)
            {
                return objclientRep.GetClientDetailForConfirmOrder(UserId);
            }
        }

    I am really thankful to you for sparing the time.
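
    For illustration only (this sketch is not part of the original question): a minimal stateless variant of the ClientManager above, which sidesteps the "data overwritten by concurrent requests" concern by not storing results in instance fields at all. Client and ClientRepository are assumed to be the classes from the post.

        public class ClientManager
        {
            private readonly ClientRepository objclientRep = new ClientRepository();

            // Each method returns the repository result directly instead of first
            // stashing it in a field shared by every caller of this instance.
            public List<Client> GetClients()
            {
                return objclientRep.GetClients();
            }

            public Client GetaClient(int ClientId)
            {
                return objclientRep.GetaClient(ClientId);
            }
        }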

    Read the article

  • Visual Studio 2010 UML Tools. How do they integrate with the code? For small/mid-size products, is this overkill?

    - by punkouter
    I just got a book that covers all the VS2010 tools. Most of them I have never used, like load testing/web testing, UML tools, layer diagrams... Has anyone had any real-world experience with using these VS2010 tools, such as the UML diagramming? I am wondering whether they are something that would really be useful when starting a new project, or just busy work that no one ever looks at once the diagrams are made. How are the UML diagrams integrated with the rest of development in VS2010? On our last project we just made some really basic Visio diagrams, but maybe this is better. A lot of the VS2010 Ultimate tools (the layer diagram, for instance) seem like overkill for small/mid-level projects. UML seems to be one of those things I have heard about for the past 10 years but never seem to use.

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing it or not doing it? The platform is Netezza. The size is in the terabytes, with hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what issues should I focus on when arguing that this path will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • Avoiding mass propagation of properties and events for exposure to ViewModels.

    - by firoso
    I have an MVVM application I am developing that is to the point where I'm ready to start putting together a user interface (my client code is largely functional). I'm now running into the issue of getting my application data to where I need it so that it can be consumed by the view model and then bound to the view. Unfortunately, it seems that I've either made a few structural oversights, or I'm just going to have to face the reality that I need to propagate events and raise an excessive number of change notifications to tell view models that their properties have changed. Let me give an example of my issue: I have a class "Unit" contained in a class "Test", contained in a class "Session", contained in a class "TestManager", which is contained in "TestDataModel", which is utilized by "TestViewModel", which is databound to by my "TestView"... WHOA. Now, consider that Unit (the bottom of the hierarchy) has a property called "Results" that is updated periodically. I want to expose that to my view model and then databind it to my view. The trouble is, the only way I can really think to do this is to propagate events WAY up the chain saying "I've been updated!" and then request the new value... This seems like an awful way to do it. Alternatively, I could register a static event and raise it, and have the appropriate "Unit view model" grab the event and request the update. This SEEMS better... but... static events? Is that a taboo idea? Also, having an expression like TestDataModel.TestManager.Session.Test.Unit.Results[i] seems REALLY gross to have on a view model. I know this all reeks of a bad design issue, but I can't figure out what I did wrong. Should I be using more singleton/container-controlled-lifetime objects? Register object instances with static helper containers? Obviously these are hard questions to answer without being intimate with the existing structure, but if you've run into situations like this, what did you do to refactor? Should I just live with this, add mass events, and propagate them?
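
    For illustration only (not from the original question): a minimal sketch of the "static event" idea mentioned above, expressed as a tiny event aggregator so the deeply nested Unit and the UnitViewModel never reference each other directly. All names here are invented.

        using System;

        // Hypothetical minimal event aggregator for the Results updates.
        public static class ResultsMessenger
        {
            public static event Action<int, string> ResultUpdated; // (unitId, newResult)

            public static void Publish(int unitId, string result)
            {
                var handler = ResultUpdated;
                if (handler != null) handler(unitId, result);
            }
        }

        // Deep in the hierarchy, Unit publishes without knowing about any view model:
        //     ResultsMessenger.Publish(this.Id, newResult);
        // A UnitViewModel subscribes once and raises its own PropertyChanged:
        //     ResultsMessenger.ResultUpdated += (id, r) => { if (id == this.UnitId) Results = r; };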

    Read the article

  • How practical to change MVC app from traditional authentication to cookieless?

    - by Phil.Wheeler
    I have an application written in MVC that uses your regular .Net Forms Authentication. There's nothing particularly new or exciting going on with it. My client has now asked that users be able to log in to the app on the same machine but in different browsers, or different tabs within the same browser. To my mind, he's asking for a scope change to have authentication moved to cookieless instead of its current design. Not having had any experience with doing this in MVC, I'm curious to know before I get started how much hurt I'm in for by trying this. Are there better ways to do it? What should I consider? Any advice appreciated.

    Read the article

  • Maximum memory which malloc can allocate

    - by Vikas
    I was trying to figure out the maximum amount of memory I can malloc on my machine (1 GB RAM, 160 GB HD, Windows platform). I read that the maximum memory malloc can allocate is limited to physical memory (on the heap). Also, when a program's memory consumption exceeds a certain level, the computer stops working because other applications do not get the memory they require. To confirm this, I wrote a small program in C:

        #include <stdlib.h>

        int main(void) {
            int *p;
            while (1) {
                p = (int *)malloc(4);   /* allocate 4 bytes per iteration */
                if (!p) break;          /* stop when allocation fails */
            }
            return 0;
        }

    I was hoping that at some point memory allocation would fail and the loop would break, but my computer hung in what looked like an infinite loop. I waited for about an hour and finally had to force a shutdown. Some questions: Does malloc also allocate memory from the hard disk? What was the reason for the behaviour above? Why didn't the loop break at any point? Why was there no allocation failure?

    Read the article

  • What is a Utility Services layer?

    - by query_bug
    Hi, I need some information about the Utility Services layer. Can someone please help me find information on it, as I am supposed to give a presentation on the Utility Services layer? Thanks in advance...

    Read the article

  • JSF servlet architecture help needed

    - by abc
    I want a mechanism in my web app as described below: the user will enter mydomain.com/CompanyName and, depending on the CompanyName, the app will show that company's logo and customized page. I will store that parsed parameter in the session. On each subsequent request I will compare the parsed CompanyName with the one stored in the session; if they match, the application will show the requested page with the user's data, otherwise it will redirect to the login page. The main thing is that I want this within a JSF architecture. I tried using a servlet that resolves all requests, parses the URL and then dispatches the request to the preferred servlet, but the problem is that it goes into a loop, because the dispatch resolves to the same controller servlet again.

    Read the article

  • Innovative seminar topic

    - by sijith
    Hi, I don't know whether this is the right place to ask this question. I want a good seminar topic that suits a corporate-culture company. I need a topic on either the management side or the technical side that is innovative, interesting, and engaging for employees.

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GByte of RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GByte each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:

    - Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
    - Denormalize the database and use Redis on each web server, with BookSleeve as the Redis client.
    - Use a combination of SQL Server and Redis.
    - Use SQL Server 2012 together with Hadoop.
    - Use Hadoop without SQL Server.

    What is the best way for a read-only database to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table against which all queries can be performed, with all other information from the other tables available by a simple key lookup.
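
    For illustration only (not from the original question): a rough sketch of the accelerator-table idea expressed as a key lookup. IKeyValueStore is a hypothetical abstraction standing in for whichever store is finally chosen (Redis via BookSleeve, or SQL Server); none of these names come from the post.

        // Hypothetical key/value abstraction over the denormalized, read-only data.
        public interface IKeyValueStore
        {
            string Get(string key);
        }

        public class CatalogQueries
        {
            private readonly IKeyValueStore store;

            public CatalogQueries(IKeyValueStore store)
            {
                this.store = store;
            }

            // All queries hit the denormalized accelerator record; the rest of the
            // relational data is then resolved by further simple key lookups.
            public string GetAcceleratorRecord(string tenant, string recordId)
            {
                return store.Get(tenant + ":" + recordId); // one key per tenant/record
            }
        }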

    Read the article

  • Import CSV/Excel file into SQL database with ASP.NET

    - by kiev
    Hi everyone! I am starting a project with ASP.NET in Visual Studio 2008 / SQL Server 2000 (2005 in future) using C#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing db schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table schema with the column names I will use.) I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the FileUpload .NET control, as well as some 3rd-party upload controls such as SlickUpload, to handle the upload; the uploaded files should be < 500 MB. The next part is reading my CSV/Excel file and parsing it for display to the user so they can match it with our db schema. I saw CsvReader and others, but Excel is more difficult since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are other, more advanced requirements like record matching and a preview of the import records, but I wish to understand how to do this part first. Update: I ended up using CsvReader from LumenWorks.Framework for uploading the csv files.
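
    For illustration only (not part of the original post): a minimal sketch of reading a CSV with the LumenWorks CsvReader mentioned in the update, assuming its usual CsvReader(TextReader, bool hasHeaders) constructor; the column-to-schema matching against the lookup table is deliberately left out.

        using System.IO;
        using LumenWorks.Framework.IO.Csv;

        public static class CsvImport
        {
            public static void ReadAll(string path)
            {
                using (var csv = new CsvReader(new StreamReader(path), true)) // true = file has a header row
                {
                    string[] headers = csv.GetFieldHeaders(); // names to match against the db schema
                    while (csv.ReadNextRecord())
                    {
                        for (int i = 0; i < csv.FieldCount; i++)
                        {
                            string value = csv[i]; // field value as a string; map/convert it here
                        }
                    }
                }
            }
        }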

    Read the article

  • Designing a multi-module J2EE application

    - by user728947
    My question might be abstract or out of context, but I am asking here since I have little idea how this is done. I am wondering how big applications/platforms break down their application into multiple modules and how they are able to manage module dependencies. For example, some e-commerce applications tend to be broken down into various modules like pricing, promotions, shipping, import/export and many more. When we develop such applications we hardly think about the underlying modules and how they have been designed to provide their functionality. Most of those modules are not web applications but standalone modules, not deployed in the web app as jar files. Can anyone help me understand how they break things up, or is there a standard way to do this? Any help/resources to gain insight will really be helpful.

    Read the article

  • Good workflow for website design

    - by Olav
    I would like some ideas about a good workflow for website design, with a high degree of "offshoring" (Elance, oDesk etc.). I would like to do as much as possible in "pre-production", with client input, ideas etc. stored in IA diagrams, wireframe mockups and the like in something like a wiki. I also like the idea of having different people come up with different design proposals. I would like some idea of the costs of the different phases and tasks ($, %, hours). By design I mean roughly the aspects of a site that can be done with client-side tools, especially XHTML and CSS. What other tools should I use besides IA diagrams?

    Read the article

  • SQLite not pluralizing Table Names

    - by weenet
    I am using SQLite as my db during development, and I want to postpone actually creating a final database until my domains are fully mapped. So I have this in my Global.asax.cs file:

        private void InitializeNHibernateSession()
        {
            Configuration cfg = NHibernateSession.Init(
                webSessionStorage,
                new[] { Server.MapPath("~/bin/MyNamespace.Data.dll") },
                new AutoPersistenceModelGenerator().Generate(),
                Server.MapPath("~/NHibernate.config"));

            if (ConfigurationManager.AppSettings["DbGen"] == "true")
            {
                var export = new SchemaExport(cfg);
                export.Execute(true, true, false, NHibernateSession.Current.Connection, File.CreateText(@"DDL.sql"));
            }
        }

    The AutoPersistenceModelGenerator hooks up the various conventions, including a TableNameConvention like so:

        public void Apply(FluentNHibernate.Conventions.Instances.IClassInstance instance)
        {
            instance.Table(Inflector.Net.Inflector.Pluralize(instance.EntityType.Name));
        }

    This is working nicely, except that the generated SQLite db does not have pluralized table names. Any idea what I'm missing? Thanks.
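
    For context only (an assumption about how the Apply method above is typically hosted, not something stated in the post): a minimal Fluent NHibernate class convention wrapping that Apply call. Whether SchemaExport picks it up depends on the conventions actually registered in AutoPersistenceModelGenerator.

        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        public class PluralizedTableNameConvention : IClassConvention
        {
            public void Apply(IClassInstance instance)
            {
                // Same call as in the post: "Client" would become "Clients", etc.
                instance.Table(Inflector.Net.Inflector.Pluralize(instance.EntityType.Name));
            }
        }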

    Read the article

  • Zend Framework Application: module dependencies

    - by takeshin
    How do you handle dependencies between modules in Zend Framework to create reusable, drop-in modules? E.g. I have a Newsletter module, which allows users to subscribe by providing an e-mail address. Next, I plan to add a Blog module, which allows subscribing to posts by e-mail (it obviously duplicates some functionality of the newsletter, but the e-mail addresses are stored in the User model). Next is the Forum module, with the same subscribe-to-post functionality. But I want the ability to use these modules independently of each other, i.e. an application with the newsletter alone, newsletter with blog, or a combination of two or three modules at once. This is pretty common; it is the same story with the search feature. I want a search module with options to search all data, blog data, or forum data if available. Is there any design pattern for this? Do I have to add some moduleExists($moduleName) check, or provide some interfaces or abstract classes, or some base controller pattern similar for each module?

    Read the article

  • Using AMQP to collect events

    - by synapse
    Does AMQP have any advantages over an ad-hoc implementation for a simple stats-gathering scenario? It works like this: clients send events (more than we care to put into persistent storage) to (several) web workers, the workers aggregate them and write to a single database. I don't think I should consider using AMQP for this, because I'll still need web workers to receive events from clients through HTTP and to publish them. Am I missing something?
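
    For illustration only (not from the original question): a rough sketch of the ad-hoc side of the comparison, a web worker aggregating event counts in memory and flushing totals on a timer instead of publishing each event to a broker. All names are invented and the database write is omitted.

        using System.Collections.Concurrent;
        using System.Threading;

        public class StatsAggregator
        {
            private readonly ConcurrentDictionary<string, long> counts = new ConcurrentDictionary<string, long>();
            private readonly Timer flushTimer;

            public StatsAggregator(int flushSeconds)
            {
                flushTimer = new Timer(_ => Flush(), null, flushSeconds * 1000, flushSeconds * 1000);
            }

            public void Record(string eventName)
            {
                counts.AddOrUpdate(eventName, 1, (_, current) => current + 1);
            }

            private void Flush()
            {
                // Simplified: a real worker would swap the dictionary atomically first.
                foreach (var pair in counts)
                {
                    // Write pair.Key / pair.Value to the single database here (omitted).
                }
                counts.Clear();
            }
        }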

    Read the article

  • Java portal architectural considerations

    - by Woot4Moo
    There is currently a need to create an application that will serve 5 different customers, each requiring their own specific URL and content repository. My question is: when designing this application, what should my considerations be for protecting the content of individual customers while meeting the unique-URL requirement? The system will be sitting on Windows with a Postgres database and Java as the implementation language.

    Read the article

  • Testing the controller in ASP.NET MVC

    - by csetzkorn
    Hi, I would like to test the validation of a submitted DTO. This is the bare bones of a controller Create action:

        [AcceptVerbs(HttpVerbs.Post)]
        public RedirectToRouteResult Create(SomeDTO SomeDTO)
        {
            SomeObject SomeObject = null;
            try
            {
                SomeObject = this.RepositoryService.getSomeObjectRepository().Create(SomeDTO, this.RepositoryService);
            }
            catch (BrokenRulesException ex)
            {
                ex.AddModelStateErrors(ModelState, "Model");
            }
            catch (Exception e)
            {
                ModelState.AddModelError("Exception", e.Message);
            }

            TempData["ViewData"] = ViewData;
            TempData["SomeDTO"] = SomeDTO;

            return ModelState.IsValid
                ? RedirectToAction("SomeObjectDetail", new { Id = SomeObject.Id })
                : RedirectToAction("Form");
        }

    The mechanics, although not entirely relevant, are as follows: I have a strongly typed view (a form) which submits a DTO to this action, which either returns the form or the details page of the created object. I would like to unit test whether the Model contains certain key/errorMessage combinations given some invalid DTO. Has anyone done similar stuff? Any pointers would be very much appreciated. Thanks. Best wishes, Christian
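
    For illustration only (not part of the question): a minimal sketch of asserting ModelState contents after posting an invalid DTO, assuming an NUnit-style test and a controller named SomeController; the key "Model" matches the AddModelStateErrors call above, but the controller name and expected message are invented.

        using System.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class SomeControllerTests
        {
            [Test]
            public void Create_WithInvalidDto_AddsExpectedModelStateError()
            {
                // Hypothetical setup: a controller wired to a repository whose Create()
                // throws BrokenRulesException for this DTO.
                var controller = new SomeController(/* fake RepositoryService here */);
                var invalidDto = new SomeDTO(); // deliberately fails validation

                controller.Create(invalidDto);

                Assert.IsFalse(controller.ModelState.IsValid);
                Assert.IsTrue(controller.ModelState.ContainsKey("Model"));
                var messages = controller.ModelState["Model"].Errors.Select(e => e.ErrorMessage);
                Assert.IsTrue(messages.Contains("Some expected rule message")); // hypothetical message
            }
        }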

    Read the article

  • Java, Massive message processing with queue manager (trading)

    - by Ronny
    Hello, I would like to design a simple application (without J2EE and JMS) that can process a massive amount of messages (as in trading systems). I have created a service that can receive messages and place them in a queue so that the system won't get stuck when overloaded. Then I created a service (QueueService) that wraps the queue and has a pop method that pops a message from the queue and returns null if there are no messages; this method is marked as "synchronized" for the next step. I have created a class that knows how to process the message (MessageHandler) and another class that can "listen" for messages in a new thread (MessageListener). The thread has a while(true) loop and continually tries to pop a message. If a message was returned, the thread calls the MessageHandler class and, when it's done, it asks for another message. Now I have configured the application to open 10 MessageListeners to allow processing of multiple messages, so I have 10 threads that are looping all the time. Is that a good design? Can anyone point me to some books or sites on how to handle such a scenario? Thanks, Ronny

    Read the article

  • The one true domain model - a fallacy?

    - by James L
    I've just finished working for a client that has burnt millions on a project to come up with the 'one true domain model' for their business. This was the project's sole deliverable. What are your thoughts on this? Is a single version of the truth realistic?

    Read the article

  • MS SQL Server Job with precise timing

    - by TcKs
    Hi, I have a DB with game data (map, players, etc.) and I have the core game mechanics written in a T-SQL stored procedure. I need to run the game loop (via the stored procedure) every X seconds. I tried using a SQL Job, but when I set the interval in seconds, the SQL Server stops responding. If I set the interval to more than one minute, everything is OK. I need the game loop to be precise in time: it should run only once per interval and be executed every X seconds precisely (tolerance should be less than one second). Can I do this with MS SQL Server's capabilities? Or should I create a Windows service which will repeatedly execute the game loop procedure? Or should I go another way? Thanks! EDIT: The game loop stored procedure takes less time than the interval.
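
    For illustration only (not from the original question): a minimal sketch of the "Windows service" option mentioned above, a timer that calls the game-loop stored procedure every X seconds. The procedure name and class names are invented, and the sketch assumes the procedure finishes well inside the interval, as the EDIT states.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        public class GameLoopRunner
        {
            private readonly Timer timer;
            private readonly string connectionString;

            public GameLoopRunner(string connectionString, int intervalSeconds)
            {
                this.connectionString = connectionString;
                // Fires once per interval, starting immediately.
                timer = new Timer(_ => RunLoop(), null, TimeSpan.Zero, TimeSpan.FromSeconds(intervalSeconds));
            }

            private void RunLoop()
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.GameLoop", conn)) // hypothetical procedure name
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }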

    Read the article

  • How to understand existing projects

    - by John
    Hi. I am a trainee developer and have been writing .NET applications for about a year now. Most of the work I have done has involved building new applications (mainly web apps) from scratch, and I have been given more or less full control over the software design. This has been a great experience; however, as a trainee developer my confidence about whether the approaches I have taken are the best is minimal. Ideally I would love to collaborate with more experienced developers (I find this is the best way I learn); however, in the company I work for, developers tend to work in isolation (a great shame for me). Recently I decided that a good way to learn more about how experienced developers approach their design might be to explore some open source projects. I found myself a little overwhelmed by the projects I looked at. With my level of experience it was hard to understand the body of code I faced. My question is a slightly fuzzy one: how do developers approach the task of understanding a new medium- to large-scale project? I found myself poring over lots of code and struggling to see the wood for the trees. At any one time I felt that I could understand a small portion of the system but could not see how it all fits together. Do others get this same feeling? If so, what approaches do you take to understanding the project? Do you have any other advice about how to learn design best practices? Any advice will be very much appreciated. Thank you.

    Read the article
