Search Results

Search found 9975 results on 399 pages for 'enterprise architecture'.

Page 365/399 | < Previous Page | 361 362 363 364 365 366 367 368 369 370 371 372  | Next Page >

  • UML diagrams that are actually pretty?

    - by Borek
    I'm looking for diagramming software that produces good-looking output. It doesn't need to support everything (or even much) of UML, and it doesn't need code engineering functions or anything like that; it just needs to produce visually interesting output. Here are a couple of examples of products I consider ugly / not good enough: Visio with the default UML stencils (I didn't find better-looking ones), Enterprise Architect, Dia, ArgoUML and many other "professional" UML tools. A few visually compelling tools I considered (but found issues with):
    - Visual Studio class diagrams - just for .NET classes, but the output is miles better than what UML tools typically produce
    - NClass - similar to VS's class diagrams, but I could not find the "pretty" blue skin anywhere
    - yuml.me - very nice but lacking some advanced layout options. I have to say I find their style almost ideal for high-level diagrams - they look sketchy, which is good
    - Balsamiq - I think Joel used this for hginit.com and I liked it. However, it's not suited to creating software diagrams, so I can imagine it would be quite a lot of work
    - MS Word actually has quite a good graphics engine, but I'd rather leave that as a choice of last resort
    I'd be grateful for any good tips.
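
    For context, yuml.me diagrams are written as a compact text DSL appended to a URL such as http://yuml.me/diagram/scruffy/class/ - a tiny sketch with made-up class names (-> for association, ^ for inheritance):

        [Customer]->[Order]
        [Customer]^[CorporateCustomer]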

    Read the article

  • Why do .NET developers offer 32-bit/64-bit versions of .NET assemblies?

    - by Tyler
    Every now and then I see both x86 and x64 versions of a .NET assembly. Consider the following web part for SharePoint. Why wouldn't the developer just offer a single version and let the JIT compiler sort out the rest? When I see these kinds of offerings, is it just that the developer decided to create a native image using a tool like ngen in order to avoid a JIT? Someone please help me out here, I feel like I'm missing something of note.
    Updated: From what I got below, both x86 and x64 builds are offered for one or more of the following reasons:
    - The developer wanted to avoid JITing and created a native image of the code, targeting a given architecture using a tool like ngen.exe.
    - The assembly contains platform-specific COM calls, so there is no point building it as AnyCPU. In these cases builds that target different platforms may contain different code.
    - The assembly may contain Win32 calls using P/Invoke, which won't get remapped by the JIT, so the build should target the platform it is bound to.
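
    A minimal sketch of the P/Invoke case (the library and function names here are hypothetical): the native binary is compiled for one architecture, so an assembly bound to it only works under a matching process bitness, which is what the separate x86/x64 builds guarantee.

        using System;
        using System.Runtime.InteropServices;

        class NativeInterop
        {
            // The native DLL is built for a fixed architecture; loading a
            // 32-bit DLL inside a 64-bit process fails at run time.
            [DllImport("mynative.dll")]
            static extern IntPtr CreateContext(int flags);

            static void Main()
            {
                // IntPtr.Size shows the bitness the JIT chose: 4 on x86, 8 on x64.
                Console.WriteLine("Running as " + (IntPtr.Size * 8) + "-bit");
            }
        }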

    Read the article

  • Multiple configurations in Qt

    - by user360607
    Hi all! I'm new to Qt Creator and I have several questions regarding multiple build configurations. A side note: I have Qt Creator 1.3.1 installed on my Linux machine. I need to have two configurations in my Qt Creator project. The thing is that these aren't simply debug and release but are based on the target architecture - x86 or x64. I came across http://stackoverflow.com/questions/2259192/building-multiple-targets-in-qt-qmake and from that I went on to try something like:
        Conf_x86 {
            TARGET = MyApp_x86
        }
        Conf_x64 {
            TARGET = MyApp_x64
        }
    This way, however, I don't seem to be able to use the Qt Creator IDE to build each of these separately (Build All, Rebuild All, etc. options from the IDE menu). Is there a way to achieve this - maybe even show Conf_x86 and Conf_x64 as new build configurations in Qt Creator? One other thing: the Qt I have is 64-bit, so by default the target built using the Qt Creator IDE will also be 64-bit. I noticed that the effective qmake call in the build step includes the option '-spec linux-g++-64'. I also noticed that should I add '-spec linux-g++-32' in 'Additional arguments' it would override '-spec linux-g++-64' and the resulting target will be 32-bit. How can I achieve this by simply editing the contents of the .pro file? I saw that all these changes are initially saved in the .pro.user file, but that doesn't suit me at all. I need to be able to make these configurations from the .pro file if possible. Any help will be appreciated. Thanks in advance!
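
    A commonly used pattern for this, as a sketch (it assumes a 32-bit toolchain and 32-bit Qt libraries are available for the -m32 build): drive the scopes via CONFIG values, which each Qt Creator build configuration can pass as an additional qmake argument, e.g. CONFIG+=Conf_x64.

        # select on the command line / in Qt Creator's qmake step:
        #   qmake CONFIG+=Conf_x86   (or CONFIG+=Conf_x64)
        Conf_x86 {
            TARGET = MyApp_x86
            QMAKE_CXXFLAGS += -m32
            QMAKE_LFLAGS   += -m32
        }
        Conf_x64 {
            TARGET = MyApp_x64
            QMAKE_CXXFLAGS += -m64
            QMAKE_LFLAGS   += -m64
        }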

    Read the article

  • On the search for my next great .Net Read

    - by user127954
    Just got done with "The Art of Unit Testing". It was a great read and I think everyone should go buy a copy. With that said, I think the next book I'd like to read would be an architecture / design type book that focuses heavily on building your objects / software in such a way that it would be:
    - Low coupling
    - High cohesion
    - Easily maintainable / extensible
    - Easy to test
    - Easy to navigate / debug
    The above characteristics are the most important ones, but maybe it would also include (though not necessarily) designing for:
    - Performance - don't want to design a system and at the end find out it's dog slow :)
    - Scalability - again, don't want to design something and at the end find out it won't scale.
    I'd also prefer (but again not necessarily):
    - Something newer - architectural principles seem to gradually evolve / improve over time and I'd like something with current thinking.
    - .NET as the illustrating language - like I said above, it's not mandatory, but since it's what I use every day I'd prefer it. Doesn't really matter if it's in VB.NET or C#.
    Some of the topics it would cover include how to minimize dependencies and use interfaces throughout your solution rather than concrete classes. Maybe it would contrast / compare some of the newer design approaches like DDD, the Repository pattern, etc. I already have "Clean Code" (don't know if it's this type of book or not) and "Working Effectively with Legacy Code" on my radar, but I'd like to read a book based on the topics I talked about above first. Is there such a book?

    Read the article

  • Refactoring an ASP.NET 2.0 app to be more "modern"

    - by Wayne M
    This is a hypothetical scenario. Let's say you've just been hired at a company with a small development team. The company uses an internal CRM/ERP type system written in .NET 2.0 to manage all of its day-to-day things (let's simplify and say customer accounts and records). The app was written a couple of years ago when .NET 2.0 was just out and uses the following architectural designs:
    - WebForms
    - A data layer that is a thin wrapper around SqlCommand and calls stored procedures
    - Rudimentary DTO-style business objects that are populated via the sprocs
    - A "business logic" layer that acts as a gateway between the webform and database (i.e. the code-behind calls that layer)
    Let's say that as more changes and requirements are added to the application, you start to feel that the old architecture is showing its age, and changes are increasingly difficult to make. How would you go about introducing refactoring steps to A) modernize the app (i.e. proper separation of concerns) and B) make sure that the app can readily adapt to change in the organization? IMO the changes would involve:
    - Introduce an ORM like LINQ to SQL and get rid of the sprocs for CRUD
    - Assuming that you can't just throw out WebForms, introduce the M-V-P pattern to the forms (see the sketch below)
    - Make sure the gateway classes conform to SRP and the other SOLID principles
    - Change the logic that is reused to be web service methods instead of having to reuse code
    What are your thoughts? Again, this is a totally hypothetical scenario that many of us have faced in the past, or may end up facing.
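
    For the M-V-P step, a minimal passive-view sketch (all names are illustrative, not from the actual app): the code-behind implements the view interface and delegates to a presenter, so the logic becomes testable without the ASP.NET pipeline.

        public class Customer { public string Name { get; set; } }

        public interface ICustomerRepository { Customer GetById(int id); }

        public interface ICustomerView
        {
            string CustomerName { set; }
        }

        public class CustomerPresenter
        {
            private readonly ICustomerView view;
            private readonly ICustomerRepository repository;

            public CustomerPresenter(ICustomerView view, ICustomerRepository repository)
            {
                this.view = view;
                this.repository = repository;
            }

            // The webform's Page_Load calls this; the logic lives here,
            // not in the code-behind.
            public void Display(int customerId)
            {
                view.CustomerName = repository.GetById(customerId).Name;
            }
        }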

    Read the article

  • Specific Shopping Cart Recommendations

    - by Dean J
    I'm trying to suggest a solution for a friend who owns an existing web shop. The current solution isn't cutting it. The new solution needs a few things that look like they're enterprise-only if I go with Magento, and $12k a year for a store with maybe $20k in stock just doesn't work.
    - The site should have items, which have one or more categories. Each category may have a parent category.
    - Items have an MSRP, and a discount rate by supplier, brand, and sometimes an additional discount by product.
    - When a user buys something, it should automatically set up a shipping label with UPS or USPS, depending on the user's choice, and build two invoices: one to go in the box, one to go into records. This is crucial; it's low profit per item, so it needs to minimize labor here.
    - Need to be able to have sales (limited by time) and discount codes / coupon codes. Ideally it would have private sales and/or members-only rates as well.
    - It needs a payment gateway; PayPal/Google Checkout-only isn't going to fly. Must be able to accept Visa/MC.
    Suggestions? I'm debating just building this myself in Java or PHP, but wanted to point my friend to a reasonable-cost solution that already exists if I can. This all seems pretty straightforward to code, save working with the UPS/USPS/Visa/MC APIs, and doing CSS for it.

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.). A co-worker has been positing that this strategy will not scale, because having so many different connection pools will not be scalable, and that we should refactor the database so that all of the different applications use a single central database, that any modifications that may be unique to a system will need to be reflected from that one database, and then use a single pool powered by Tomcat. He has posited that there is a lot of "metadata" that goes back and forth across the network to maintain a connection pool. My understanding is that, with proper tuning to use only as many connections as necessary across the different pools (low-volume apps getting fewer connections, high-volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections - or, more formally, that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely going to be differences between the apps and that each system could make modifications to its schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is no strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connections versus one large database/connection pool.
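
    For concreteness, a sketch of the multi-pool setup expressed as Tomcat JNDI resources (all names, the driver, and the sizes below are illustrative): the per-pool "metadata" is essentially this configuration plus the pooled connections themselves, so the total connection count matters far more than the number of pools.

        <!-- context.xml: one pool per application, sized to its traffic -->
        <Resource name="jdbc/AppA" auth="Container" type="javax.sql.DataSource"
                  driverClassName="org.postgresql.Driver"
                  url="jdbc:postgresql://dbhost/appa"
                  username="appa" password="secret"
                  maxActive="10" maxIdle="5"/>

        <Resource name="jdbc/Shared" auth="Container" type="javax.sql.DataSource"
                  driverClassName="org.postgresql.Driver"
                  url="jdbc:postgresql://dbhost/shared"
                  username="shared" password="secret"
                  maxActive="20" maxIdle="10"/>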

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all, first - this is not meant to be an ignorant 'which is better' war thread... Rather, I generally need help in making an architecture decision / argument to put forward to my boss. Skipping the details - I would simply love to know the results from anyone who has done some performance comparisons of shell vs. [insert general-purpose interpreted programming language here], such as C# or Java... Surprisingly, I have spent some time on Google and searching here without finding any of this data. Has anyone ever done these comparisons, in different use cases: hitting a database in an X number of loops doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD ops - and also not hitting the database, just a regular 50k-loop type comparison doing different types of calculations, and things of that nature? In particular - for right now, I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted GPPL would be fine, even the higher-level ones like Python). But I also need to know about standard programming calculations / instructions, etc... Before you ask 'why not just write a quick test yourself?' - the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting, not to mention *nix as a whole... So asking the question on here of the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in near-perpetual deadline crunch as it is ;). Thanks so much in advance,
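
    A minimal sketch of the shell side of such a test (it assumes sqlplus is on the PATH and $DB_USER/$DB_PASS are set): note that each iteration forks a fresh sqlplus process and opens a new connection, which is usually what makes the shell version lose badly to a C#/Java program holding one connection open across the loop.

        #!/bin/bash
        # time 1000 trivial queries - one process and one connection per query
        time for i in $(seq 1 1000); do
            echo "SELECT 1 FROM dual;" | sqlplus -s "$DB_USER/$DB_PASS" > /dev/null
        done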

    Read the article

  • nhibernate mapping: delete collection, insert new collection with old IDs

    - by npeBeg
    My issue looks similar to this one: (link), but I have a one-to-many association:
        <set name="Fields" cascade="all-delete-orphan" lazy="false" inverse="true">
            <key column="[TEMPLATE_ID]"></key>
            <one-to-many class="MyNamespace.Field, MyLibrary"/>
        </set>
    (I also tried to use …) This mapping is for the Template object. Both it and the Field object have their ID generators set to identity. So when I call session.Update for the Template object it works fine - well, almost: if the Field object has an Id number, an UPDATE sql request is issued; if the Id is 0, an INSERT is performed. But if I delete a Field object from the collection, it has no effect on the database. I found that if I also call session.Delete for this Field object, everything will be OK, but due to the client-server architecture I don't know what to delete. So I decided to delete all the collection elements from the DB and call session.Update with a new collection. And I've got an issue: NHibernate performs the UPDATE operation for the Field objects that have a non-zero Id, but they were removed from the DB! Maybe I should use some other Id generator or something... What is the best way to make NHibernate perform a "delete all" / "insert all" routine for the collection?
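
    For reference, a sketch of when all-delete-orphan normally kicks in (names are illustrative): the orphan DELETE is only issued when the removal happens on a collection the session is tracking, which is why removing items from a detached graph and calling Update does nothing; reattaching the detached graph with session.Merge is the usual way to let NHibernate diff the collection against the database instead.

        // requires: using System.Linq;
        public void RemoveField(ISessionFactory sessionFactory, int templateId, int fieldId)
        {
            using (var session = sessionFactory.OpenSession())
            using (var tx = session.BeginTransaction())
            {
                var template = session.Get<Template>(templateId);
                var obsolete = template.Fields.First(f => f.Id == fieldId);
                template.Fields.Remove(obsolete);   // orphan -> DELETE at flush
                tx.Commit();
            }
        }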

    Read the article

  • I am confused about how to use @SessionAttributes

    - by yusaku
    I am trying to understand the architecture of Spring MVC. However, I am completely confused by the behavior of @SessionAttributes. Please look at the SampleController below; it handles the POST method with the SuperForm class. In fact, only the fields of the SuperForm class are bound, as I expected. However, after I put @SessionAttributes on the controller, the handling method binds as SubAForm. Can anybody explain to me what happens in this binding?
        @Controller
        @SessionAttributes("form")
        @RequestMapping(value = "/sample")
        public class SampleController {

            @RequestMapping(method = RequestMethod.GET)
            public String getCreateForm(Model model) {
                model.addAttribute("form", new SubAForm());
                return "sample/input";
            }

            @RequestMapping(method = RequestMethod.POST)
            public String register(@ModelAttribute("form") SuperForm form, Model model) {
                return "sample/input";
            }
        }

        public class SuperForm {
            private Long superId;
            public Long getSuperId() { return superId; }
            public void setSuperId(Long superId) { this.superId = superId; }
        }

        public class SubAForm extends SuperForm {
            private Long subAId;
            public Long getSubAId() { return subAId; }
            public void setSubAId(Long subAId) { this.subAId = subAId; }
        }

        <form:form modelAttribute="form" method="post">
            <fieldset>
                <legend>SUPER FIELD</legend>
                <p>SUPER ID: <form:input path="superId" /></p>
            </fieldset>
            <fieldset>
                <legend>SUB A FIELD</legend>
                <p>SUB A ID: <form:input path="subAId" /></p>
            </fieldset>
            <p><input type="submit" value="register" /></p>
        </form:form>

    Read the article

  • Suggestions on writing a TCP IP messaging system (Client/Server) using Delphi 2010

    - by Shane
    I would like to write a messaging system using TCP/IP in Delphi 2010. I would like to hear what my best options are for using the standard Delphi 2010 components / Indy components for doing this. I would like to write a server which does the listening and forwarding of messages to all machines on the network running a client.
    1.) a.) Clients can send a message to the server to be forwarded to all other clients.
        b.) Clients listen for messages from other senders (via the server) and display messages.
    2.) a.) The server can send a message to all clients.
        b.) The server forwards any messages from clients to all other clients.
    Thanks for any suggestions. NOTE: I am not writing an instant messaging or chat program. This is merely a system where users can send alerts/messages to other users - they can not reply to each other! No commercial, shareware, etc. links - please! I would like to hear about how you would go about writing this type of system, what approaches you would take, and possibly the TCP/IP messaging architecture you would use - whether it be straight Windows API, Indy components, etc.

    Read the article

  • Protecting sensitive entity data

    - by Andreas
    Hi, I'm looking for some advice on architecture for a client/server solution with some peculiarities. The client is a fairly thick one, leaving the server mostly to persistence, concurrency and infrastructure concerns. The server contains a number of entities which contain both sensitive and public information. Think for example that the entities are persons; assume that social security number and name are sensitive and age is publicly viewable. When starting the client, the user is presented with a number of entities, not disclosing any sensitive information. At any time the user can choose to log in and authenticate against the server; given the authentication is successful, the user is granted access to the sensitive information. The client is hosting a domain model and I was thinking of implementing this as some kind of "lazy loading", making the first request instantiate the entities and later refreshing them with sensitive data. The entity getters would throw exceptions on sensitive information when it has not been disclosed, e.g.:
        class PersonImpl : PersonEntity
        {
            private bool undisclosed;

            public override string SocialSecurityNumber
            {
                get
                {
                    if (undisclosed)
                        throw new UndisclosedDataException();
                    return base.SocialSecurityNumber;
                }
            }
        }
    Another, more friendly approach could be to have a value object indicating that the value is undisclosed:
        get
        {
            if (undisclosed)
                return undisclosedValue;
            return base.SocialSecurityNumber;
        }
    Some concerns:
    - What if the user logs in and then out? The sensitive data has been loaded but must be undisclosed once again.
    - One could argue that this type of functionality belongs within the domain and not some infrastructural implementation (i.e. repository implementations).
    - As always when dealing with a larger number of properties, there's a risk that this type of functionality clutters the code.
    Any insights or discussion is appreciated!
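
    One way to flesh out the value-object idea, as a sketch (the type and member names here are invented): wrap each sensitive field in a small generic so callers must check disclosure explicitly, and logging out only needs to swap the wrappers back to the undisclosed state rather than touching each property.

        public struct Sensitive<T>
        {
            private readonly T value;
            private readonly bool disclosed;

            private Sensitive(T value, bool disclosed)
            {
                this.value = value;
                this.disclosed = disclosed;
            }

            public static Sensitive<T> Of(T value) { return new Sensitive<T>(value, true); }
            public static Sensitive<T> Undisclosed() { return new Sensitive<T>(default(T), false); }

            public bool IsDisclosed { get { return disclosed; } }

            // Throws only at the point of use, like the exception approach,
            // but callers can test IsDisclosed first.
            public T Value
            {
                get
                {
                    if (!disclosed) throw new InvalidOperationException("undisclosed");
                    return value;
                }
            }
        }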

    Read the article

  • How to pass common arguments to Perl modules

    - by Leonard
    I'm not thrilled with the argument-passing architecture I'm evolving for the (many) Perl scripts that have been developed for some scripts that call various Hadoop MapReduce jobs. There are currently 8 scripts (of the form run_something.pl) that are run from cron. (And more on the way... we expect anywhere from 1 to 3 more for every function we add to Hadoop.) Each of these has about 6 identical command-line parameters, and a couple of command-line parameters that are similar, all specified with Euclid. The implementations are in a dozen .pm modules, some of which are common, and others of which are unique... Currently I'm passing the args globally to each module. Inside run_something.pl I have:
        set_common_args (%ARGV);
        set_something_args (%ARGV);
    And inside Something.pm I have:
        sub set_something_args {
            (%MYARGS) = @_;
        }
    So then I can do:
        if ( $MYARGS{'--needs_more_beer'} ) {
            $beer++;
        }
    I'm seeing that I'm probably going to have additional "common" files that I'll want to pass args to, so I'll have three or four set_xxx_args calls at the top of each run_something.pl, and it just doesn't seem too elegant. On the other hand, it beats passing the whole stupid argument array down the call chain, and choosing and passing individual elements down the call chain is (a) too much work, (b) error-prone, and (c) doesn't buy much. In lots of ways what I'm doing is just object-oriented design without the object-oriented language trappings, and it looks uglier without said trappings, but nonetheless... Anyone have thoughts or ideas?
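
    A minimal sketch of the object-flavored variant (the module and method names are invented for illustration): build one shared arguments object at startup, and let any module pull from it instead of each module keeping its own %MYARGS copy.

        package RunArgs;
        use strict;
        use warnings;

        my $instance;   # one shared instance per process

        sub init {
            my ($class, %args) = @_;
            $instance = bless { %args }, $class;
            return $instance;
        }

        sub instance { return $instance }

        sub get {
            my ($self, $key) = @_;
            return $self->{$key};
        }

        1;

    Then run_something.pl calls RunArgs->init(%ARGV) once, and any module can write:
        use RunArgs;
        $beer++ if RunArgs->instance->get('--needs_more_beer');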

    Read the article

  • C++ Namespaces & templates question

    - by Kotti
    Hi! I have some functions that can be grouped together but don't belong to some object / entity and therefore can't be treated as methods. So, basically, in this situation I would create a new namespace and put the definitions in a header file and the implementation in a cpp file. Also (if needed) I would create an anonymous namespace in that cpp file and put there all additional functions that don't have to be exposed / included in my namespace's interface. See the code below (probably not the best example and could be done better with another program architecture, but I just can't think of a better sample...)
    Sample code (header):
        namespace algorithm {
            void HandleCollision(Object* object1, Object* object2);
        }
    Sample code (cpp):
        #include "header"

        // Anonymous namespace that wraps
        // routines that are used inside 'algorithm' methods
        // but don't have to be exposed
        namespace {
            void RefractObject(Object* object1) {
                // Do something with that object
                // (...)
            }
        }

        namespace algorithm {
            void HandleCollision(Object* object1, Object* object2) {
                if (...)
                    RefractObject(object1);
            }
        }
    So far so good. I guess this is a good way to manage my code, but I don't know what I should do if I have some template-based functions and want to do basically the same. If I'm using templates, I have to put all my code in the header file. OK, but how should I conceal some implementation details then? I want to hide the RefractObject function from my interface, but I can't simply remove its declaration (just because I have all my code in a header file)... The only approach I came up with was something like:
    Sample code (header):
        namespace algorithm {
            // Is still exposed as a part of the interface!
            namespace impl {
                template <typename T>
                void RefractObject(T* object1) {
                    // Do something with that object
                    // (...)
                }
            }

            template <typename T, typename Y>
            void HandleCollision(T* object1, Y* object2) {
                impl::RefractObject(object1);
                // Other stuff
            }
        }
    Any ideas how to make this better in terms of code design?

    Read the article

  • Saving a single entity instead of the entire context - revisited

    - by nite
    I'm looking for a way to have fine-grained control over what is saved using Entity Framework, rather than the whole ObjectContext.SaveChanges(). My scenario is pretty straightforward, and I'm quite amazed it isn't catered for in EF - it's pretty basic in NHibernate and every other data access paradigm I've seen. I'm generating a bunch of data (in a WPF UI) and allowing the user to fine-tune what is proposed and choose what is actually committed to the database. For the proposed entities I'm:
    - getting a bunch of reference entities (e.g. languages) via my ObjectContext,
    - creating the proposed entities and assigning these reference entities to them (as navigation properties), so by virtue of their relationship to the reference entities they're implicitly added to the ObjectContext,
    - trying to create & save individual entities based on the proposed entities.
    I figure this should be really simple and trivial, but everything I've tried has hit a brick wall. I set up another ObjectContext and added just the entity I need (it then tries to add the whole graph and fails, as it's on another ObjectContext). I've tried MergeOption.NoTracking on my reference entities to try to get the Attach/AddObject not to navigate through them to create a graph, to no avail. I've removed the navigation properties from the reference entities. I've tried AcceptAllChanges; that works, but it's pretty useless in practice, as I do still want to track and save other entities. In a simple test, I can create two of my proposed entities, AddObject the one I want to save and then Detach the one I don't, then call SaveChanges; this works, but again it's not great in practice. Following are a few links to some of the nifty ideas which in the end don't help, but illustrate the complexity of EF for something so simple. I'm really looking for a SaveSingle/SaveAtomic method, and think it's a pretty reasonable and basic ask for any DAL, let alone a cutting-edge ORM.
    http://stackoverflow.com/questions/1301460/saving-a-single-entity-instead-of-the-entire-context
    www.codeproject.com/KB/architecture/attachobjectgraph.aspx?fid=1534536&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=3071122&fr=1
    bernhardelbl.spaces.live.com/blog/cns!DB54AE2C5D84DB78!238.entry

    Read the article

  • Best way to distribute form that can be printed or saved?

    - by Jason Antman
    I need to develop a simple form (intended only for printing) to be filled in by arbitrary end users (i.e. no specialized software). Ideally, I'd like the end user to be able to save their inputs to the form and update it periodically. It seems that (at least without LiveCycle Enterprise Suite) Adobe Reader won't save data input into a PDF form. Aside from just distributing the form as a Word document, does anyone have any suggestions? Background: I do some work for a volunteer ambulance corps. They have a lot of elderly patients who don't know (or can't remember) their medical history. They want to develop a common form with personal information (name, address, DOB, medications list, etc.) for elderly residents to hang on their refrigerators (apparently a common solution to this problem). As some of them (or their children/grandchildren) are computer literate, it would make the most sense to provide a downloadable blank form that can be filled in, saved, updated, and re-printed as needed. Due to worries about privacy, HIPAA, etc., anything with server-side generation is out; it needs to be 100% client-side, and in a format that the majority of non-technical computer users can access without additional software. Thanks for any tips... at this point, I'm leaning towards just using a .doc form.

    Read the article

  • NSTask Launch causing crash

    - by tripskeet
    Hi, I have an application that can import an XML file through this terminal command:
        open /path/to/main\ app.app --args myXML.xml
    This works great with no issues, and I have used AppleScript to launch this command through the shell and it works just as well. Yet when I try using Cocoa's NSTask launcher with this code:
        NSTask *task = [[NSTask alloc] init];
        [task setLaunchPath:@"/usr/bin/open"];
        [task setCurrentDirectoryPath:@"/Applications/MainApp/InstallData/App/"];
        [task setArguments:[NSArray arrayWithObjects:[(NSURL *)foundApplicationURL path], @"--args", @"ImportP.xml", nil]];
        [task launch];
    the application will start up to the initial screen and then crash when either the next button is clicked or when trying to close the window. I've tried using NSAppleScript with this:
        NSAppleScript *script = [[NSAppleScript alloc] initWithSource:@"tell application \"Terminal\" do script \"open /Applications/MainApp/InstallData/App/Main\\\\ App.app\" end tell"];
        NSDictionary *errorInfo;
        [script executeAndReturnError:&errorInfo];
    This will launch the program, but it crashes as well, and I get this error in my Xcode debug window:
        2011-01-04 17:41:28.296 LaunchAppFile[4453:a0f] Error loading /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: dlopen(/Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types, 262): no suitable image found. Did find: /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: no matching architecture in universal wrapper
        LaunchAppFile: OpenScripting.framework - scripting addition "/Library/ScriptingAdditions/Adobe Unit Types.osax" declares no loadable handlers.
    So with research I came up with this:
        NSAppleScript *script = [[NSAppleScript alloc] initWithSource:@"do shell script \"arch -i386 osascript /Applications/MainApp/InstallData/App/test.scpt\""];
        NSDictionary *errorInfo;
        [script executeAndReturnError:&errorInfo];
    But this causes the same results as the last command. Any ideas on what causes this crash?

    Read the article

  • Can you recommend an easy to use easy to develop CMS?

    - by el_at_yahoo
    We need some easy way to manage web sites at our company, and we are evaluating some CMS tools for this purpose. We do not yet know what features the sites will need (but it will definitely be something with lots of functionality), so we are looking for something with lots of features and, more importantly, something easily extensible (if it does not have some feature, we at least want to be able to build it ourselves). We have no experience with content management systems but we do with Java, so it has to be something written in Java. We evaluated some tools and from our perspective the following seem the most promising (in no particular order):
    - OpenCMS
    - dotCMS (Community Edition vs Enterprise Edition)
    - InfoGlue
    - Alfresco (EE vs CE)
    - Magnolia (EE vs EE Pro vs CE)
    - Jahia (CE vs EE)
    Since we have no experience with any of them, we were wondering if those of you who do can share some information about how good they are or how easily they can be used and extended. I know similar questions have been asked on SO and I also know this is highly subjective and people will vote to close it as soon as it is posted, but for us it is important to know what difficulties other people have faced using the above tools (we don't want to walk a path that leads nowhere if other people already know it leads nowhere). Others could then vote on the posted answers if they agree or not. From your experience, which of the above-mentioned CMSs is the most easily extensible, the easiest to use, the easiest to learn, etc.? Thank you and happy holidays to all.

    Read the article

  • An Erroneous SQL Query makes browser hang until script timeout exceeded

    - by Jimbo
    I have an admin page in a Classic ASP web application that allows the admin user to run queries against the database (SQL Server 2000). What's really strange is that if the query you send has an error in it (an invalid table join, a column you've forgotten to group by, etc.) the BROWSER hangs (CPU usage goes to maximum) until the SERVER script timeout is exceeded, and then it spits out a timeout-exceeded error (server and browser are on different machines, so I'm not sure how this happens!). I have tried this in IE 8 and FF 3 with the same result. If you run that same query (with errors) directly from SQL Enterprise Manager, it returns the real error immediately. Is this a security feature? Does anyone know how to turn it off? It even happens when the connection to the database is using 'sa' credentials, so I don't think it's a security setting :(
        Dim oRS
        Set oRS = Server.CreateObject("ADODB.Recordset")
        oRS.ActiveConnection = sConnectionString
        ' run the query - this is for the admin only so doesn't check for sql safe commands etc.
        oRS.Open Request.Form("txtSQL")
        If Not oRS.EOF Then
            ' list the field names from the recordset
            For i = 0 To oRS.Fields.Count - 1
                Response.Write oRS.Fields(i).Name & "&nbsp;"
            Next
            ' show the data for each record in the recordset
            While Not oRS.EOF
                For i = 0 To oRS.Fields.Count - 1
                    Response.Write oRS.Fields(i).Value & "&nbsp;"
                Next
                Response.Write "<br />"
                oRS.MoveNext
            Wend
        End If

    Read the article

  • Mapping one column in a table to multiple tables

    - by user1721814
    I am working on a new product development project creating a WMS system. I have done this in the past using ASP, VB, and other techniques, where we did not hard-code the mapping. But now I am working on it using MVC and Entity Framework, and I am stumped. How can I map one column in a transaction table to a column in multiple tables? I have a transaction table trans:
        Transid orderref TType productid qty ...(more columns)
    Now the orderref will hold either Receiptkey, orderkey, movementkey or adjustmentkey, and the TType column will tell me which type of transaction I am dealing with; based on that I would know which table to link further. Now how can I achieve this in Entity Framework? This is the most important step. I have done this many times in other languages, but now using EF I am stuck. Please help. I have checked a lot online but I have not found it. I am new to the MVC and Entity Framework architecture. Any guidance would be highly appreciated. Ranjit
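
    Entity Framework cannot map a single foreign-key column to several different principal tables, so one common workaround, sketched here (the context, entity, and TType code names are all invented for illustration), is to keep orderref as a plain scalar and resolve the reference in a repository method keyed on TType:

        using System.Linq;

        public static class TransResolver
        {
            // Returns the entity the transaction row points at, based on TType.
            public static object ResolveReference(WmsContext db, Trans t)
            {
                switch (t.TType)
                {
                    case "R": return db.Receipts.Single(r => r.ReceiptKey == t.OrderRef);
                    case "O": return db.Orders.Single(o => o.OrderKey == t.OrderRef);
                    case "M": return db.Movements.Single(m => m.MovementKey == t.OrderRef);
                    case "A": return db.Adjustments.Single(a => a.AdjustmentKey == t.OrderRef);
                    default:  return null;
                }
            }
        }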

    Read the article

  • "Detecting" and loading of "plugins" in GAE

    - by Patrick Cornelissen
    Hi! I have a "plugin like" architecture and I want to create one instance of each class that implements a dedicated interface and put these in a cache. (To have a singleton-ish effect). The plugins will be provided as jars and put into the app engine war file before the app is uploaded. I have tried to use the ClassPathScanningCandidateComponentProvider as I'm using spring anyway, but this didn't work. The provider complained that it was not able to find the HttpServletResponse class file while scanning the classpath. I can't get around this, when I add the servlet jar, then I'll get of course problems, because the same jar is also provided by the GAE. If I don't, I'm getting the error above... So I tried to add a static initialization code, but of course this doesn't work, because the class is initialized when it's instantiated for the first time. (Well I knew that but it was worth a try) The last chance I currently see is that I create a properties file with all plugin classes when the package is created, but this requires writing of a maven plugin etc. and I'd like to avoid that. Is there something that I am missing?

    Read the article

  • Click behaviour - Difference in IE and FF ?!

    - by OlliD
    Hey folks, I just came to the conclusion that a project I am currently working on might have a "logical" error in functionality. Currently I am using server technology with PHP/MySQL and jQuery. Within the page there's a normal link reference with the tag:
        <a href="contentpage?page=xxx">next step</a>
    The pain point now seems to be the jQuery click event on the same element. The intention was to save the (current) content of the page (- form elements) via another PHP script using the PHP session command. For some reason, IE handles the jQuery click event right before executing the standard html command, which reloads the current page with the new page parameter. With FF the behaviour is different: I assume that FF first executes the html command and afterwards executes the javascript code which handles the click event. Therefore the result set here is wrong, respectively empty. My question now is whether you have had the same experience and how you handled / worked around this problem. I'd be thankful for any of your tips or further feedback. Maybe you also have a suggestion on how to rethink the current architecture. Regards, Oliver
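
    A common way to remove that race, sketched here (the selector, form id, and save_state.php endpoint are invented for illustration): cancel the default navigation, post the form state first, and only navigate once the save has completed, so both browsers behave identically.

        $('a.next-step').click(function (e) {
            e.preventDefault();              // stop the browser from navigating yet
            var href = this.href;
            $.post('save_state.php', $('#myform').serialize(), function () {
                window.location = href;      // follow the link once the state is saved
            });
        });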

    Read the article

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel Extensions which are directly available in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like this, in C#:
        NetworkParallel.ForEach(myEnumerable, () => {
            // Computing and/or access to web resource or local network database here
        });
    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:
    - The fact that such a parallel task will be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would rather be two distinct applications than two threads of the same application.
    - The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network.
    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerful stuff with much less pain? Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC with the following architecture (MVVM):
        WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as the DB)
    All are different projects. In the data access layer we have created different entity models (edmx) based on functionality; the tables under a particular flow are grouped into their own entity model. We are using self-tracking entities to communicate back and forth with the WPF client through the WCF service. For a single model everything works fine, but when we created multiple models, a few issues started coming up. The multiple models have a few duplicate tables/entities. The two problems are:
    1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects get created, e.g.:
        CompanyModel (edmx) - Company (entity) - ObjectChangeTracker, ObjectState
        ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx) - Order (entity) - ObjectChangeTracker2, ObjectState2
    Is there any way to avoid this?
    2) A few tables are shared across the models, e.g. the Company entity is used in all the models above. At compile time this does not throw any error, but at run time it gives an error saying "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type 'Company'". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the name of the entity in the same assembly? Thanks in advance, and we'd appreciate it if anyone has an approach for these issues. Thanks, Kiran

    Read the article

  • What is the most common way to use a middleware in node with express and connect

    - by Bernhard
    Thinking about the correct way to make use of middleware in a node.js web project using express and connect which is growing at the moment. Of course there are middlewares which have to pass or extend requests globally, but in a lot of cases there are special jobs, like preparing incoming data, where the middleware should only run for a certain set of http methods and routes. I have a component-based architecture and each component brings its own middleware layer, which can implement those for the requests this component can handle. On app startup any required component is loaded and prepared. Is it a good idea to bind the middleware code execution to URLs to keep CPU load lower, or is it better to use middlewares only for global purposes? Here's a dummy of how a URL-related middleware could look:
        app.use(function(req, res, next) {
            // Check if the requested route is a part of the current component
            // or if the middleware should be passed on any request
            if (APP.controller.groups.Component.isExpectedRoute(req) ||
                APP.controller.groups.Component.getConfig().MIDDLEWARE_PASS_ALL === true) {
                // Execute the middleware code here
                console.log('This is a route which should be affected by middleware');
                ...
                next();
            } else {
                next();
            }
        });
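
    For comparison, a sketch of the scoping Express/Connect already offers out of the box (the paths and names are illustrative): mounting the middleware at a path, or attaching it per route, avoids running the URL check inside the middleware on every single request.

        // component-scoped middleware, written once
        var componentMiddleware = function (req, res, next) {
            console.log('runs only for requests under the mount point');
            next();
        };

        // Option 1: mount-point scoping - runs for everything under /component
        app.use('/component', componentMiddleware);

        // Option 2: per-route scoping - runs only for this route and method
        app.get('/component/items', componentMiddleware, function (req, res) {
            res.send('items');
        });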

    Read the article

< Previous Page | 361 362 363 364 365 366 367 368 369 370 371 372  | Next Page >