Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.


  • Thoughts on streamlining multiple .Net apps

    - by John Virgolino
    We have a series of ASP.Net applications that have been written over the course of 8 years, mostly in the first 3-4 years. They have been running quite well with little maintenance, but new functionality is being requested and we are running into IDE and platform issues. The apps were written in .Net 1.x and 2.x and run in separate spaces, but they are presented as a single suite of applications which use a common navigation toolbar (implemented as a user control). Every time we want to add something to a menu in the nav, we have to modify it in all the apps, which is a pain. Also, between the various versions of Crystal Reports and the fact that we used tables to organize the visual elements, we end up with a mess, especially with all the different .Net versions running. We need to streamline the suite of apps and make it easier to add new apps without a hassle. We also need to bring all these apps under one .Net platform and IDE. In addition, there is a WordPress blog styled to match the application suite "integrated" into the UI, and a link to a MediaWiki wiki application as well.

    My current thinking is to use an open source content management system (CMS) like Joomla (PHP based, unfortunately, but it works well) as the user interface framework for style templating and menu management. Joomla's article management would allow us to migrate the wiki content into articles which could be published without interfering with the .Net apps. Then we would essentially use an IFrame within an "article" to "host" each .Net application, upgrade the .Net apps to VS2010, strip out all the common header/footer controls, and migrate the styles to use the style sheets used in the CMS.

    As I write this, I certainly realize this is a lot of work, and there are optimization issues it may cause; using IFrames also seems a bit like cheating, and I've read about issues with IFrames. I know that we could use .Net application styling instead, but it seems like a lot more work (not sure really). The use of a CMS to handle the blog and wiki also seems appealing, unless there is a .Net CMS out there that can handle all of these requirements.

    Given this information, am I totally going in the wrong direction? We tried to use open source and integrate it over time, but now this has become hard to maintain. Am I not aware of some technology out there that will meet our requirements? Did we do this right, and should we just focus on streamlining the .Net apps? I understand that no matter what we do, it's going to be a lot of work. The community's considerable experience would be helpful. Thanks!! PS - A complete rewrite is not an option.

    Read the article

  • Differences in MS Office Charts

    - by simendsjo
    I'm about to do some Office integration, creating charts from some data sources and adding them to PPT slides. But some coworkers are saying that using PPT charts is suboptimal, as they are missing features of Excel charts and are different in many ways. They're unable to come up with examples, and neither am I... I found a blog post about Office 2007 saying there are some differences in the programming model, but that they all use the same underlying engine. Are there really any differences in the capabilities of the charts? Is it mostly UI issues? What features are different or missing from PPT charts? Are these issues resolved in Office 2010?

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that:

    - Querying on a primary key or unique index will give you an O(1) lookup time.
    - Querying on a non-unique index will also give O(1) time, albeit maybe the '1' is slower than for the unique index (?)
    - Querying on a column without an index will give an O(N) lookup time (full table scan).

    Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is SQLite, but I'd be interested in knowing to what extent this varies between different databases too.
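    One way to replace the guesswork, at least for SQLite, is to ask the engine how it will execute each query. A minimal sketch using the xerial sqlite-jdbc driver (the table and column names are invented for illustration):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class QueryPlanDemo {
            public static void main(String[] args) throws Exception {
                // In-memory SQLite database; needs the sqlite-jdbc jar on the classpath.
                try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
                     Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)");
                    st.execute("CREATE INDEX idx_email ON users(email)");

                    explain(st, "SELECT * FROM users WHERE id = 42");            // SEARCH via primary key
                    explain(st, "SELECT * FROM users WHERE email = 'a@b.c'");    // SEARCH via idx_email
                    explain(st, "SELECT * FROM users WHERE length(email) = 5");  // SCAN: full table
                }
            }

            // Prints SQLite's plan for a query ("SEARCH ... USING ..." vs "SCAN ...").
            static void explain(Statement st, String sql) throws Exception {
                try (ResultSet rs = st.executeQuery("EXPLAIN QUERY PLAN " + sql)) {
                    while (rs.next()) {
                        System.out.println(sql + " -> " + rs.getString("detail"));
                    }
                }
            }
        }

    For what it's worth, both SEARCH cases are B-tree lookups, so they are O(log N) rather than strictly O(1); for realistic table sizes the difference from constant time rarely matters.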

    Read the article

  • Which is better: creating a materialized view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same up-to-date datasets from 5-7 MySQL tables. I am thinking of creating a table or materialized view to gather all the demanding columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert/update/delete operations each time the other tables are updated. If I create a materialized view, I worry whether performance can really be greatly improved, because the data in the other tables changes very frequently; most likely, the view would need to be rebuilt each time before selecting from it. Any ideas? E.g. how to cache? Other extra measures I can take?
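    Worth noting as background: MySQL has no built-in materialized views, so the usual substitutes are either a real summary table maintained by triggers, or one rebuilt periodically. A rough sketch of a transactional full rebuild over JDBC (all table and column names are invented for illustration):

        import java.sql.Connection;
        import java.sql.Statement;

        public class SummaryTableRefresh {
            // Rebuilds a denormalized summary table inside one transaction so
            // concurrent readers never observe a half-populated snapshot
            // (assumes InnoDB tables, since MyISAM ignores transactions).
            public static void refresh(Connection conn) throws Exception {
                boolean oldAutoCommit = conn.getAutoCommit();
                conn.setAutoCommit(false);
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("DELETE FROM order_summary");
                    st.executeUpdate(
                        "INSERT INTO order_summary (customer_id, total, last_order) " +
                        "SELECT c.id, SUM(o.amount), MAX(o.created_at) " +
                        "FROM customers c JOIN orders o ON o.customer_id = c.id " +
                        "GROUP BY c.id");
                    conn.commit();
                } catch (Exception e) {
                    conn.rollback();
                    throw e;
                } finally {
                    conn.setAutoCommit(oldAutoCommit);
                }
            }
        }

    If the base tables change too often for periodic rebuilds, the same insert/update/delete maintenance the question describes is typically pushed into triggers so it cannot be forgotten.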

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza; the size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what issues should I focus on when arguing that this path will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • Is there a module that implements an efficient array type in Erlang?

    - by dsmith
    I have been looking for an array type with the following characteristics in Erlang:

        append(vector(), term())      -> O(1)
        nth(Idx, vector())            -> O(1)
        set(Idx, vector(), term())    -> O(1)
        insert(Idx, vector(), term()) -> O(N)
        remove(Idx, vector())         -> O(N)

    I normally use a tuple for this purpose, but the performance characteristics are not what I would want for large N. My testing shows the following performance characteristics:

        erlang:append_element/2 -> O(N)
        erlang:setelement/3     -> O(N)

    I have started on a module based on the clojure.lang.PersistentVector implementation, but if it's already been done I won't reinvent the wheel.
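    Since the question already points at clojure.lang.PersistentVector: that class is plain Java, so its behavior can be probed directly before porting it to Erlang. A minimal sketch, assuming the Clojure runtime jar is on the classpath:

        import clojure.lang.PersistentVector;

        public class VectorProbe {
            public static void main(String[] args) {
                // A 32-way trie: append, nth and set are O(log32 N),
                // i.e. effectively constant for practical N.
                PersistentVector v = PersistentVector.EMPTY;
                for (int i = 0; i < 1_000_000; i++) {
                    v = v.cons(i);                    // append returns a new vector
                }
                Object x = v.nth(500_000);            // random access
                PersistentVector w = v.assocN(0, -1); // "set" leaves v untouched
                System.out.println(x + " " + w.nth(0) + " " + v.nth(0));
            }
        }

    Note that it offers no arbitrary-index insert or remove at all (only append and pop at the tail), so the two O(N) operations in the wish list above would still have to be built on top.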

    Read the article

  • OpenLayers FramedCloud Autosizing

    - by Kyle
    According to the documentation, I should be able to configure the size of my OpenLayers popup by declaring an OpenLayers.Size object in the FramedCloud constructor:

        this.popup = new OpenLayers.Popup.FramedCloud(
            'featurePopup',
            this.options.feature.geometry.getBounds().getCenterLonLat(),
            new OpenLayers.Size(80, 60),
            this.template(this.formattedAttrs()),
            null,
            true,
            this.onPopupClose
        );

    Currently the popup that is rendered is autosized no matter what dimensions I use in the constructor. I've tried manually setting the autosizing attribute of the FramedCloud to false, as well as manually adjusting the CSS styling for the popup, without achieving the results I need. I checked and found some similar issues in the OpenLayers 2.11 issues list, but I haven't found a workaround. Any ideas?

    Read the article

  • Eval IronPython Scripts during ASP.NET Web Request; Static Engine or Not

    - by Josh Pearce
    I would like to create an ASP.NET MVC web application which has extensible logic that does not require a re-build. I was thinking of creating a filter which holds an instance of the IronPython engine. What I would like to know is: how much overhead is there in creating a new engine during each web request, and would it be a better idea to keep a static engine around? However, if I were to keep a single static engine around, what are the issues I might run into as far as locking and script scope? Is it possible to have multiple scopes in the same IronPython engine so I don't get variable collisions and security issues between web requests?

    Read the article

  • jKey (JavaScript key shortcut plugin) Issue

    - by Oscar Godson
    A friend and I are writing a plugin for jQuery that makes it easy for devs to add key shortcuts, and we're damn close, but no cigar. We're having issues with the key combos; it seems like problems appear when you call the same selector multiple times on a page. Try pressing alt+a... you'll see it works one time, then gets all mangled up. Anyone know how to fix it? It'll be on GitHub after it's corrected, and I'd be happy to add a "thank you to" link in the header with the copyright info for whoever can fix this :) It's nicely documented and I have all the code and stuff here. So... anyone? http://jsbin.com/azaha4

    Read the article

  • Silverlight Binding: User controls inside datagrid.

    - by Eric
    I have a DataGrid in my Silverlight app that has a few columns. A couple of basic columns bind with no issues. One column has a UserControl in it, and the XAML is as follows:

        <data:DataGridTemplateColumn Header="" CanUserSort="True" Width="107">
            <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <local:StaticPageEnlistment EnlistmentName="{Binding SiteName}"
                                                Width="400" Height="150"/>
                </DataTemplate>
            </data:DataGridTemplateColumn.CellTemplate>
        </data:DataGridTemplateColumn>

    So I have a public string property called EnlistmentName that I have bound to the SiteName value. I use this same "{Binding SiteName}" in all my other columns with no issues; why can't the user control accept the same binding string?

    Read the article

  • What Use Are Threads Outside of Parallel Problems on Multicore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG2 decoder that's going to run on a multi-core CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multi-core system if your task doesn't gain significant performance from a parallel implementation? Or, more succinctly: what kinds of non-performance-related problems justify threading?

    Edit: Well, I just ran across one instance that's not CPU-limited but where threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot. Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.
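    The "sweet spot" the edit describes is easy to sketch: over a high-latency link, each connection spends most of its time waiting, so putting connections on separate threads overlaps the waits instead of paying for them sequentially. A toy illustration, assuming plain blocking I/O (the URLs are placeholders, and real code would add timeouts and error handling):

        import java.io.InputStream;
        import java.net.URL;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ParallelFetch {
            public static void main(String[] args) throws Exception {
                List<String> urls = Arrays.asList(
                        "http://example.com/a", "http://example.com/b",
                        "http://example.com/c", "http://example.com/d");
                // More threads than cores is fine here: each thread is blocked
                // on the network most of the time, not burning CPU.
                ExecutorService pool = Executors.newFixedThreadPool(urls.size());
                List<Future<Long>> results = new ArrayList<>();
                for (String u : urls) {
                    results.add(pool.submit(() -> fetch(u)));
                }
                long total = 0;
                for (Future<Long> f : results) {
                    total += f.get();
                }
                pool.shutdown();
                System.out.println("downloaded " + total + " bytes");
            }

            // Reads one URL to the end over a blocking stream, returning byte count.
            static long fetch(String url) throws Exception {
                long n = 0;
                try (InputStream in = new URL(url).openStream()) {
                    byte[] buf = new byte[8192];
                    for (int r; (r = in.read(buf)) != -1; ) {
                        n += r;
                    }
                }
                return n;
            }
        }

    The same work can be done with non-blocking I/O on one thread, as the quoted article notes; the thread-per-connection version just trades some local resources for a much simpler design.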

    Read the article

  • Java - is this an idiom or pattern, behavior classes with no state

    - by Berlin Brown
    I am trying to incorporate more functional programming idioms into my Java development. One pattern that I like the most, and which avoids side effects, is building classes that have behavior but don't necessarily have any state. The behavior is locked into the methods, but they only act on the parameters passed in. The code below is code I am trying to avoid:

        public class BadObject {
            private Map<String, String> data = new HashMap<String, String>();

            public BadObject() {
                data.put("data", "data");
            }

            /**
             * Act on the data class. But this is bad because we can't
             * rely on the integrity of the object's state.
             */
            public void execute() {
                data.get("data").toString();
            }
        }

    The code below is nothing special, but I am acting on the parameters, and state is contained within that class. We may still run into issues with this class, but that is an issue with the method and the state of the data; we can address issues in the routine, as opposed to not trusting the entire object. Is this some form of idiom? Is this similar to any pattern that you use?

        public class SemiStatefulOOP {

            /**
             * Private class implies that I can access the members of the
             * <code>Data</code> class within the <code>SemiStatefulOOP</code>
             * class, and I can also access the getData method from some
             * other class.
             *
             * @see Test1
             */
            class Data {
                protected int counter = 0;
                public int getData() { return counter; }
                public String toString() { return Integer.toString(counter); }
            }

            /**
             * Act on the data class.
             */
            public void execute(final Data data) {
                data.counter++;
            }

            /**
             * Act on the data class.
             */
            public void updateStateWithCallToService(final Data data) {
                data.counter++;
            }

            /**
             * Similar to CLOS (Common Lisp Object System) make-instance.
             */
            public Data makeInstance() {
                return new Data();
            }
        } // End of class

    Issues with the code above: I wanted to declare the Data class private, but then I can't really reference it outside of the class, and I can't override the SemiStateful class and access the private members. Usage:

        final SemiStatefulOOP someObject = new SemiStatefulOOP();
        final SemiStatefulOOP.Data data = someObject.makeInstance();
        someObject.execute(data);
        someObject.updateStateWithCallToService(data);

    Read the article

  • How to set a bean property before executing this f:event listener

    - by user
    How do I set a bean property from the JSF page before this f:event listener executes:

        <f:event type="preRenderComponent" listener="#{bean.method}"/>

    I tried the code below, but it does not set the value on the bean property:

        <f:event type="preRenderComponent" listener="#{bean.method}">
            <f:setPropertyActionListener target="#{bean.howMany}" value="2"/>
        </f:event>

    JSF 2.1.6 with PrimeFaces 3.3.

    EDIT: Any issues with the code below? (This works! I just want to confirm whether there are any issues with it.)

        <f:event type="preRenderComponent" listener="#{bean.setHowMany(15)}"/>
        <f:event type="preRenderComponent" listener="#{bean.method}"/>

    Read the article

  • Inline function and calling cost in C

    - by Eonil
    I'm making a vector/matrix library (GCC, ARM NEON, iPhone).

        typedef struct {
            float v[4];
        } Vector;

        typedef struct {
            Vector v[4];
        } Matrix;

    I pass struct data as pointers to avoid the performance degradation of copying data when calling a function, so I designed the function like this:

        void makeTranslation(const Vector* factor, Matrix* restrict result);

    But if the function is inline, is there any reason to pass values as pointers for performance? Are those variables copied too? What about registers and caches?

        inline Matrix makeTranslation(Vector factor) __attribute__ ((always_inline));

    What do you think about the calling costs of each case?

    Read the article

  • jQuery UI Slider Problem

    - by OneNerd
    I have been using jQuery sliders for about a week now without issues in my project, but I just hit a problem:

    - I am adding 3 sliders to my page
    - All 3 are added the exact same way (like this):

        $('#slider_id').slider({ value: 100, 'slide': function(e, ui) { /* some code */ } });

    - 2 work properly
    - One does not work (it gives me a Firebug error, "f is undefined") when I drag the slider handle

    The only glaring difference I can see is that the one giving the error is inside a jQuery UI dialog(). Interestingly, when I place it outside of the dialog, it works! So, I'm wondering if there are known issues with sliders inside dialogs, and/or if there are any workarounds. Thanks

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk.

    In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision (this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes in trunk), or make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. New development would continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch and create a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also keep a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place (all the setup projects, and any dependencies that we don't compile ourselves, are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files in the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible; at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem: if they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project.

    However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • How does the HQL engine execute queries on the back end?

    - by Maddy.Shik
    I want to understand how Hibernate executes an HQL query internally, or, in other words, how the HQL query engine works. Please suggest some good links for this. One reason for asking is the following problem:

        class Branch {
            // lazily loaded
            @JoinColumn(name = "company_id")
            Company company;
        }

    Since Company is a heavy object, it is lazily loaded. Now I have the HQL query:

        from Branch as branch where branch.company.id = :companyId

    My concern is that if, in order to execute the query above, the HQL engine has to retrieve the Company object, it's a performance hit, and I would prefer to add one more property to the Branch class, i.e. companyId. In that case the HQL query would be:

        from Branch as branch where branch.companyId = :companyId

    If the HQL engine first generates SQL from the HQL and then fires the SQL query itself, there should be no performance issue. Please let me know if the problem is not understandable.
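    For what it's worth on the specific example: HQL is translated to SQL before execution, and when a query references only the identifier of a many-to-one association, Hibernate resolves it from the foreign-key column on the owning table, so no Company row is loaded. A minimal sketch of issuing such a query with the classic Hibernate Session API (the Branch mapping is assumed from the question):

        import java.util.List;

        import org.hibernate.Session;

        public class BranchQueries {
            // branch.company.id only touches the identifier of the association,
            // so Hibernate compares the company_id FK column on the branch
            // table directly: no join, and no Company instance is initialized.
            public static List<?> branchesOf(Session session, Long companyId) {
                return session
                    .createQuery("from Branch b where b.company.id = :companyId")
                    .setParameter("companyId", companyId)
                    .list();
            }
        }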

    Read the article

  • How does c# type safety affect the garbage collection?

    - by Indeera
    I'm dealing with code that handles large buffers (100MB), and manipulation of these is done in unsafe blocks. I'd like to refactor to avoid the unsafe code, and I'm wondering about the likely memory performance gains (positive/negative/neutral) before I embark on that. I assert that if the compiler can verify types, it could possibly generate better code, and that could also mean good GC performance. Is this a valid assertion? What is your experience? Thanks.

    Read the article

  • Is a program compiled with -g gcc flag slower than the same program compiled without -g?

    - by e271p314
    I'm compiling a program with -O3 for performance and -g for debug symbols (in case of a crash I can use the core dump). One thing bothers me a lot: does the -g option result in a performance penalty? When I look at the output of compilation with and without -g, I see that the output without -g is 80% smaller than the output with -g. If the extra space goes to the debug symbols, I don't care about it (I guess), since this part is not used during runtime. But if for each instruction in the compilation output without -g I need to execute 4 more instructions in the output with -g, then I would certainly prefer to stop using the -g option, even at the cost of not being able to process core dumps. How can I know the size of the debug symbols section inside the program, and, in general, does compilation with -g create a program which runs slower than the same code compiled without -g?

    Read the article

  • Icons in Silverlight: Images vs. Vectors

    - by Shnitzel
    I like using the vector drawing feature of Expression Blend to create icons; that way I can change colors easily on my icons without having to resort to an image editor. But my question is: say I have a treeview control that has an icon next to each tree element, and say I have hundreds of elements. Do you think using images is faster, performance-wise, than using vector icons? Because I'd rather use vectors, but I'm wondering about performance concerns.

    Read the article

  • What is the optimal number of threads for performing IO operations in java?

    - by marc
    In Goetz's "Java Concurrency in Practice", in a footnote on page 101, he writes: "For computational problems like this that do no I/O and access no shared data, Ncpu or Ncpu+1 threads yield optimal throughput; more threads do not help, and may in fact degrade performance..." My question is: when performing I/O operations such as file writing, file reading, file deleting, etc., are there guidelines for the number of threads to use to achieve maximum performance? I understand this will be just a guide number, since disk speeds and a host of other factors play into it. Still, I'm wondering: can 20 threads write 1000 separate files to disk faster than 4 threads can on a 4-CPU machine?
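    There is no fixed rule like Ncpu for I/O-bound work, so the practical answer is usually to measure on the target hardware. A rough harness for exactly the experiment in the question, timing different pool sizes writing 1000 files (the directory, file count and sizes are arbitrary placeholders):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class WriteBenchmark {
            public static void main(String[] args) throws Exception {
                byte[] payload = new byte[64 * 1024]; // 64 KB per file
                Files.createDirectories(Paths.get("/tmp/bench"));
                for (int threads : new int[] {1, 4, 20}) {
                    long t0 = System.nanoTime();
                    ExecutorService pool = Executors.newFixedThreadPool(threads);
                    for (int i = 0; i < 1000; i++) {
                        Path p = Paths.get("/tmp/bench", "file-" + i);
                        pool.submit(() -> {
                            try {
                                Files.write(p, payload); // one blocking write per task
                            } catch (IOException e) {
                                throw new RuntimeException(e);
                            }
                        });
                    }
                    pool.shutdown();
                    pool.awaitTermination(10, TimeUnit.MINUTES);
                    System.out.printf("%2d threads: %d ms%n",
                            threads, (System.nanoTime() - t0) / 1_000_000);
                }
            }
        }

    Whether 20 threads beat 4 depends far more on the disk than on the CPU count: a single spinning disk serializes writes and may even slow down with more threads due to seeking, while an SSD or a RAID array can service several writers at once.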

    Read the article

  • Real pagination vs Next and Previous buttons

    - by Pablo
    By real pagination I mean something like this when on page 3:

        <<Previous 1 | 2 | {3} | 4 | 5 |...| 15 | Next>>

    By Next and Previous buttons I mean something like this when on page 3:

        <<Previous Next>>

    Performance-wise, I'm sure the Previous and Next buttons are better, since unlike real pagination they don't require over-querying the database. By over-querying the database I mean getting more information from the database than you need to display on the page. My theory is that Previous and Next buttons can drastically increase a site's performance, as they only require the exact information you need to display on a page; please correct me if I'm wrong on this. So, do users really have a preference when it comes to these two options? Is it just developer preference and convenience? Which one do you prefer, and why? *Note: Previous and Next buttons are usually labeled Newer and Older.
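    The over-querying claim can be made concrete: rendering a real page list needs a COUNT(*) (or equivalent) over the result set, while Next/Previous only needs one extra row. A sketch of the common "fetch limit+1" trick over JDBC (the table, columns and SQL dialect are assumptions):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.ArrayList;
        import java.util.List;

        public class NextPrevPage {
            public static List<String> fetchPage(Connection conn, int page, int pageSize)
                    throws Exception {
                // Ask for one row more than we display: if it comes back, we know
                // a "Next" link is needed -- no COUNT(*) over the whole table.
                String sql = "SELECT title FROM posts ORDER BY id DESC LIMIT ? OFFSET ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setInt(1, pageSize + 1);
                    ps.setInt(2, page * pageSize);
                    List<String> rows = new ArrayList<>();
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            rows.add(rs.getString("title"));
                        }
                    }
                    boolean hasNext = rows.size() > pageSize;
                    if (hasNext) {
                        rows.remove(rows.size() - 1); // drop the probe row
                    }
                    System.out.println("prev=" + (page > 0) + " next=" + hasNext);
                    return rows;
                }
            }
        }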

    Read the article
