Search Results

Search found 13249 results on 530 pages for 'performance tuning'.

Page 480 of 530

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit from having indexes disabled first and then rebuilt upon completion. An existing thread suggests this approach:

      alter session set skip_unusable_indexes = true;
      alter index your_index unusable;
      [do import]
      alter index your_index rebuild;

    However, the first alter index statement fails with the following error:

      SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations
      ORA-06512: [...]
      14048. 00000 - "a partition maintenance operation may not be combined with other operations"
      *Cause:  An ALTER TABLE or ALTER INDEX statement attempted to combine a partition maintenance
               operation (e.g. MOVE PARTITION) with some other operation (e.g. ADD PARTITION or
               PCTFREE), which is illegal.
      *Action: Ensure that a partition maintenance operation is the sole operation specified in the
               ALTER TABLE or ALTER INDEX statement; operations other than those dealing with
               partitions, default attributes of partitioned tables/indices, or specifying that a
               table be renamed (ALTER TABLE RENAME) may be combined at will.

    The problem index is defined as follows:

      CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
        INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
        PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2');

    This is a custom index type from a third-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? Note that the error only occurs within a PL/SQL block.
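
    A minimal sketch of one possible workaround, assuming the vendor's domain index can simply be dropped before the bulk operation and recreated afterwards (DDL inside PL/SQL has to go through EXECUTE IMMEDIATE; the index definition is copied from above):

      BEGIN
        EXECUTE IMMEDIATE 'DROP INDEX A11_IX1';

        -- [do import]

        EXECUTE IMMEDIATE q'[CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
          INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
          PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2')]';
      END;
      /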

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it under Solaris and Linux to measure the performance of applying this code to my application. The program works as follows: first I call sbrk(0) to record the base address of the heap region; then I allocate 1.5GB of memory with malloc() and use memcpy() to copy 1.5GB of content into the allocated area; then I free the allocated memory; and finally I call sbrk(0) again to view the heap size. This is where I am a little confused. On Solaris, even though I freed the allocated ~1.5GB, the heap size of the process remains huge. But when I run the same program on Linux, the heap size after freeing is equal to the heap size before the 1.5GB allocation. I know Solaris does not free memory back to the system immediately, but I don't know how to tune the Solaris kernel to release the memory immediately after free(). Also, please explain why the same problem does not occur under Linux. Can anyone help me out? Thanks, Santhosh.
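
    A minimal sketch of the measurement described above, assuming a POSIX system where sbrk() is available (memset stands in for copying real content; note that on Linux, glibc typically serves an allocation of this size via mmap rather than the program break, which is one reason the numbers can differ between platforms):

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      int main(void)
      {
          void  *before = sbrk(0);                    /* program break before the allocation */
          size_t size   = (size_t)1536 * 1024 * 1024; /* roughly 1.5 GB */

          char *buf = malloc(size);
          if (buf == NULL) {
              perror("malloc");
              return 1;
          }
          memset(buf, 'x', size);                     /* touch every page */
          free(buf);

          void *after = sbrk(0);                      /* program break after free() */
          printf("break moved by %ld bytes\n", (long)((char *)after - (char *)before));
          return 0;
      }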

    Read the article

  • Type-safe generic data structures in plain-old C?

    - by Bradford Larsen
    I have done far more C++ programming than "plain old C" programming. One thing I sorely miss when programming in plain C is type-safe generic data structures, which are provided in C++ via templates. For the sake of concreteness, consider a generic singly linked list. In C++, it is a simple matter to define your own template class and then instantiate it for the types you need. In C, I can think of a few ways of implementing a generic singly linked list: (1) write the linked list type(s) and supporting procedures once, using void pointers to get around the type system; (2) write preprocessor macros taking the necessary type names, etc., to generate a type-specific version of the data structure and supporting procedures; (3) use a more sophisticated, stand-alone tool to generate the code for the types you need. I don't like option 1, as it subverts the type system and would likely have worse performance than a specialized type-specific implementation: using a uniform representation of the data structure for all types, and casting to/from void pointers, necessitates an indirection that would be avoided by an implementation specialized for the element type. Option 2 doesn't require any extra tools, but it feels somewhat clunky and can give bad compiler errors when used improperly. Option 3 could give better compiler error messages than option 2, as the specialized data structure code would exist in expanded form that could be opened in an editor and inspected by the programmer (as opposed to code generated by preprocessor macros). However, this option is the most heavyweight, a sort of "poor man's templates". I have used this approach before, using a simple sed script to specialize a "templated" version of some C code. I would like to program my future "low-level" projects in C rather than C++, but I have been frightened by the thought of rewriting common data structures for each specific type. What experience do people have with this issue? Are there good libraries of generic data structures and algorithms in C that do not go with option 1 (i.e. casting to and from void pointers, which sacrifices type safety and adds a level of indirection)?
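
    For what it's worth, a minimal sketch of option 2 (the names DEFINE_LIST, node_##T and list_##T##_push are invented for illustration, and error handling is omitted):

      #include <stdlib.h>

      #define DEFINE_LIST(T)                                          \
          typedef struct node_##T {                                   \
              T value;                                                \
              struct node_##T *next;                                  \
          } node_##T;                                                 \
                                                                      \
          static node_##T *list_##T##_push(node_##T *head, T value)   \
          {                                                           \
              node_##T *n = malloc(sizeof *n);                        \
              n->value = value;                                       \
              n->next  = head;                                        \
              return n;                                               \
          }

      DEFINE_LIST(int)   /* instantiates node_int and list_int_push() */

      int main(void)
      {
          node_int *head = NULL;
          head = list_int_push(head, 42);   /* type-checked: only int accepted */
          free(head);
          return 0;
      }

    Misuse still surfaces as macro-expansion error messages, which is exactly the clunkiness mentioned above.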

    Read the article

  • Good Hosting Providers With Zend Framework Support

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST well enough, until today. This isn't meant to be a condemnation of ixwebhosting, though; their prices are very low for what they do offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites; they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, and multiple domains; not much. Who should I look at, and who should I look out for? Also, I put this on SO instead of SF because of the Zend Framework-specific requirement. If I'm wrong, do as you wish.

    Read the article

  • Ruby integer to string key

    - by Gene
    A system I'm building needs to convert non-negative Ruby integers into shortest-possible UTF-8 string values. The only requirement on the strings is that their lexicographic order be identical to the natural order on integers. What's the best Ruby way to do this? We can assume the integers are 32 bits and the sign bit is 0. This is successful: (i >> 24).chr + ((i >> 16) & 0xff).chr + ((i >> 8) & 0xff).chr + (i & 0xff).chr But it appears to be 1) garbage-intense and 2) ugly. I've also looked at pack solutions, but these don't seem portable due to byte order. FWIW, the application is Redis hash field names. Building keys may be a performance bottleneck, but probably not. This question is mostly about the "Ruby way".
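
    For comparison, a small sketch of the pack route: the "N" directive is defined as an unsigned 32-bit big-endian (network order) value, so the byte order, and therefore the lexicographic order, does not depend on the platform.

      i   = 0x01020304
      key = [i].pack("N")   # => "\x01\x02\x03\x04" on every platform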

    Read the article

  • What's the best practice for handling system-specific information under version control?

    - by Joe
    I'm new to version control, so I apologize if there is a well-known solution to this. For this problem in particular I'm using git, but I'm curious how to deal with it in any version control system. I'm developing a web application on a development server, and I have defined the absolute path name to the web application (not the document root) in two places. On the production server, this path is different. I'm confused about how to deal with this. I could either (1) reconfigure the development server to share the same path as production, or (2) edit the two occurrences each time production is updated. I don't like #1 because I'd rather keep the application flexible for any future changes, and I don't like #2 because if I start developing on a second development server with a third path, I would have to change this for every commit and update. What is the best way to handle this? I have thought of:
    (1) Using custom keywords and variable expansion (such as setting a $PATH$ property in the version control properties and having it expanded in all the files). Git doesn't support this because it would be a huge performance hit.
    (2) Using post-update and pre-commit hooks. This is possibly the likely solution for git, but every time I looked at the status it would report the two files as changed, which isn't really clean.
    (3) Pulling the path from a config file outside of version control. Then I would have to have the config file in the same location on all servers, so I might as well just use the same path to begin with.
    Is there an easy way to deal with this? Am I overthinking it?
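
    A minimal sketch of idea (3), assuming a PHP-style application and invented file names: the real config file is untracked but lives at a path relative to the working copy, so it sits in the "same location" on every server without being versioned.

      # .gitignore
      config/local.php

      # config/local.example.php (tracked template, copied to config/local.php once per server)
      <?php return array('app_path' => '/var/www/dev/myapp');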

    Read the article

  • Why would one want to use the public constructors on Boolean and similar immutable classes?

    - by Robert J. Walker
    (For the purposes of this question, let us assume that one is intentionally not using auto(un)boxing, either because one is writing pre-Java 1.5 code, or because one feels that autounboxing makes it too easy to create NullPointerExceptions.) Take Boolean, for example. The documentation for the Boolean(boolean) constructor says: "Note: It is rarely appropriate to use this constructor. Unless a new instance is required, the static factory valueOf(boolean) is generally a better choice. It is likely to yield significantly better space and time performance." My question is: why would you ever want a new instance in the first place? It seems like things would be simpler if constructors like that were private. For example, if they were, you could write this with no danger (even if myBoolean were null):

      if (myBoolean == Boolean.TRUE)

    It'd be safe because all true Booleans would be references to Boolean.TRUE and all false Booleans would be references to Boolean.FALSE. But because the constructors are public, someone may have used them, which means that you have to write this instead:

      if (Boolean.TRUE.equals(myBoolean))

    Where it really gets bad is when you want to check two Booleans for equality. Something like this:

      if (myBooleanA == myBooleanB)

    ...becomes this (the second condition needs a non-null check to avoid a NullPointerException):

      if ((myBooleanA == null && myBooleanB == null)
          || (myBooleanA != null && myBooleanA.equals(myBooleanB)))

    I can't think of any reason to have separate instances of these objects that is more compelling than not having to do the nonsense above. What say you?
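
    A small sketch of the caching the question relies on: valueOf() hands back the shared constants, while the public constructor always creates a fresh instance.

      public class BooleanIdentityDemo {
          public static void main(String[] args) {
              System.out.println(Boolean.valueOf(true) == Boolean.TRUE);   // true
              System.out.println(new Boolean(true) == Boolean.TRUE);       // false
          }
      }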

    Read the article

  • Best way of showing more results with javascript/css

    - by Ricardo Neves
    I'm developing a website and I'm having trouble showing the search results to the user the way I want. Basically, after the user searches, the page makes a couple of AJAX requests, and as soon as a response arrives it appends the info to a specific element on my page. Each result is shown as a line. The problem is that in most cases there are going to be more than 1000 results, which would give the page a very long scroll. My idea was to show only the first 15 results; when the user clicks "show more", the element would expand and show the next 15 results, and so on. This would be easier to do if the website weren't responsive, but because it is, I can't find a proper way of implementing what I want without lowering the website's performance. I have two ideas: The first is to use something like #element .div:nth-child(-n+15) in my CSS and figure out a way of changing the "15" to however many results I want to show. I don't know if this can be done. Is it possible to call CSS rules with parameters? Maybe with LESS? The second option is probably a bad one if I don't want to lower the website's performance: using JavaScript, I would check whether a specific CSS class exists (like .show-15, .show-30, .show-45), add that class to my element, and if it doesn't exist, create it somehow. Any help would be appreciated.
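
    A minimal sketch of the second idea without generating per-count classes: keep the rows hidden by default in the stylesheet and reveal the next 15 on each click (the element ids and class names are invented):

      var shown = 0;
      document.getElementById('show-more').addEventListener('click', function () {
          var rows  = document.querySelectorAll('#results .row');
          var limit = Math.min(shown + 15, rows.length);
          for (var i = shown; i < limit; i++) {
              rows[i].style.display = 'block';   // overrides the stylesheet's display: none
          }
          shown = limit;
      });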

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a J2EE application and a stand-alone application when a new event occurs. We would be using a single point-to-point queue, and both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below. I have a J2EE application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server. The J2EE application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects: a person object and some with multiple instances, for an average size in the range of 3,000 to 20,000 bytes; some special cases could be many times this amount. From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?
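
    For reference, a minimal sketch of the "pointer" variant, with invented names: the producer puts only the record keys on the queue, and the stand-alone consumer re-reads the rows itself before building the interface message.

      import javax.jms.JMSException;
      import javax.jms.MapMessage;
      import javax.jms.MessageProducer;
      import javax.jms.Session;

      public class EventNotifier {
          private final Session session;
          private final MessageProducer producer;

          public EventNotifier(Session session, MessageProducer producer) {
              this.session = session;
              this.producer = producer;
          }

          public void notifyNewEvent(long personId, long eventId) throws JMSException {
              MapMessage msg = session.createMapMessage();   // a few dozen bytes, not 3-20 KB
              msg.setLong("personId", personId);
              msg.setLong("eventId", eventId);
              producer.send(msg);
          }
      }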

    Read the article

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done that have already been through this. What are some good approaches to parallelizing this type of thing?
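
    A minimal sketch of the partitioned loop using PLINQ (.NET 4). Order is a placeholder entity type, GetItem is assumed to be the existing eager-loading single-entity method, and each call is assumed to open its own DataContext, since a DataContext instance is not thread-safe.

      public List<Order> GetFullyHydratedList(IEnumerable<int> ids)
      {
          return ids
              .AsParallel()
              .WithDegreeOfParallelism(5)     // mirrors the 5-thread experiment above
              .Select(id => GetItem(id))      // eager load, children included
              .ToList();
      }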

    Read the article

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi guys, I have an ASP.NET MVC web application that interacts with a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure to pull back some results based on selections in the UI. The UI has around 20 different input selections, ranging from a textbox to dropdown lists and checkboxes, and the inputs are grouped into logical sections. Example:

      Search box: "Foo"
      Checkbox A1: ticked, Checkbox A2: unticked
      Dropdown A: option 3 selected
      Checkbox B1: ticked, Checkbox B2: ticked, Checkbox B3: unticked

    So I need to call the sproc like this:

      exec SearchPage_FindResults @SearchQuery = 'Foo', @IncludeA1 = 1, @IncludeA2 = 0,
           @DropDownSelection = 3, @IncludeB1 = 1, @IncludeB2 = 1, @IncludeB3 = 0

    The UI is not too important to this question; I just wanted to give some perspective. Essentially, I'm pulling back results for a search query and filtering those results based on a bunch of (optional) selections the user can filter on. Now, my questions: What's the best way to pass these parameters to the stored procedure? Are there any tricks or newer features (e.g. SQL Server 2008) for this, such as special "table" parameters or arrays? Can we pass user-defined types? Keep in mind I'm using Entity Framework 4.0, but I could always use classic ADO.NET for this if required. What about XML: what are the serialization/deserialization costs here, and is it worth it? How about a parameter for each logical section, comma-separated perhaps? Just thinking out loud. This page is particularly important from a user point of view and needs to perform really well. The stored procedure is already heavy in logic, so I want to minimize the performance implications. With that said, what is the best approach here?
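
    One hedged sketch of the table-valued-parameter route (available from SQL Server 2008; the type and column names are invented, and EF 4.0 itself doesn't map TVPs, so the call would go through ADO.NET with a SqlParameter of SqlDbType.Structured):

      CREATE TYPE dbo.FilterSelection AS TABLE
      (
          FilterName  varchar(50)  NOT NULL,
          IsSelected  bit          NOT NULL
      );
      GO

      CREATE PROCEDURE dbo.SearchPage_FindResults_Tvp
          @SearchQuery  nvarchar(200),
          @Filters      dbo.FilterSelection READONLY
      AS
      BEGIN
          -- placeholder body: join @Filters into the existing search logic
          SELECT FilterName FROM @Filters WHERE IsSelected = 1;
      END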

    Read the article

  • Using array of Action() in a lambda expression

    - by Sean87
    I want to do some performance measurement for a method that does some work with int arrays, so I wrote the following class:

      public class TimeKeeper
      {
          public TimeSpan Measure(Action[] actions)
          {
              var watch = new Stopwatch();
              watch.Start();
              foreach (var action in actions)
              {
                  action();
              }
              return watch.Elapsed;
          }
      }

    But I cannot call the Measure method as in the example below:

      var elpased = new TimeKeeper();
      elpased.Measure(
          () => new Action[]
          {
              FillArray(ref a, "a", 10000),
              FillArray(ref a, "a", 10000),
              FillArray(ref a, "a", 10000)
          });

    I get the following errors:

      Cannot convert lambda expression to type 'System.Action[]' because it is not a delegate type
      Cannot implicitly convert type 'void' to 'System.Action'
      Cannot implicitly convert type 'void' to 'System.Action'
      Cannot implicitly convert type 'void' to 'System.Action'

    Here is the method that works with the arrays:

      private void FillArray(ref int[] array, string name, int count)
      {
          array = new int[count];
          for (int i = 0; i < array.Length; i++)
          {
              array[i] = i;
          }
          Console.WriteLine("Array {0} is now filled up with {1} values", name, count);
      }

    What am I doing wrong?
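
    A sketch of one possible fix: Measure takes an Action[] directly, so passing a lambda that returns the array is the first error, and calling FillArray while the array is built (a void result) is the other. Each element must be a delegate that wraps the call instead (a is assumed to be a local int[] from the calling code):

      var keeper  = new TimeKeeper();
      var elapsed = keeper.Measure(new Action[]
      {
          () => FillArray(ref a, "a", 10000),
          () => FillArray(ref a, "a", 10000),
          () => FillArray(ref a, "a", 10000)
      });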

    Read the article

  • How to give new life to a five-year-old, simple but reliable PHP form?

    - by Sam
    Hi all. I have a script in PHP 5.2. I want to use a simple form, and I found something a programmer made for me about 5 years ago. When I use it, PHP now outputs an error unless I set register_long_arrays = On; then it works fine. On the PHP website, however, it says: "Warning: This feature has been DEPRECATED as of PHP 5.3.0. Relying on this feature is highly discouraged. It's recommended to turn them off, for performance reasons. Instead, use the superglobal arrays, like $_GET." Should I listen to PHP's warning, or just enable the option and keep using my old form happily? If the former, how/where do I change this simple form so that it does not rely on the deprecated setting? Your answer is much appreciated.

    form.htm:

      <html><body>
      <form method="POST" action="form_sent.php">
      ...
      </form>
      </body></html>

    form_sent.php:

      <html><body>
      <?php
      $email = $HTTP_POST_VARS[email];
      $mailto = "[email protected]";
      $mailsubj = "A Form was Sent from Website!";
      $mailhead = "From: $email\n";
      reset ($HTTP_POST_VARS);
      $mailbody = "Values submitted from web site form:\n";
      while (list($key, $val) = each ($HTTP_POST_VARS)) { $mailbody .= "$key : $val\n"; }
      if (!eregi("\n",$HTTP_POST_VARS[email])) {
          mail($mailto, $mailsubj, $mailbody, $mailhead);
      }
      ?>
      <b>Form Sent. Thank you.</b>
      </body></html>
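
    If you do want to drop the deprecated setting, a sketch of form_sent.php rewritten against the $_POST superglobal (the behaviour is kept the same, including the original newline check that guards against header injection):

      <html><body>
      <?php
      $email    = isset($_POST['email']) ? $_POST['email'] : '';
      $mailto   = "[email protected]";
      $mailsubj = "A Form was Sent from Website!";
      $mailhead = "From: $email\n";
      $mailbody = "Values submitted from web site form:\n";
      foreach ($_POST as $key => $val) {
          $mailbody .= "$key : $val\n";
      }
      if (strpos($email, "\n") === false) {
          mail($mailto, $mailsubj, $mailbody, $mailhead);
      }
      ?>
      <b>Form Sent. Thank you.</b>
      </body></html>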

    Read the article

  • Using an unencoded key vs a real Key, benefits?

    - by user246114
    Hi, I am reading the docs for key generation in App Engine, and I'm not sure what effect using a simple String key has compared to a real Key. For example, when my users sign up, they must supply a unique username:

      class User {
          /** Key type = unencoded string. */
          @PrimaryKey
          private String name;
      }

    Now, if I understand the docs correctly, I should still be able to generate named keys and entity groups using this, right?

      // Find an instance of this entity:
      User user = pm.findObjectById(User.class, "myusername");

      // Create a new obj and put it in same entity group:
      Key key = new KeyFactory.Builder(
          User.class.getSimpleName(), "myusername")
          .addChild(Goat.class.getSimpleName(), "baa").getKey();
      Goat goat = new Goat();
      goat.setKey(key);
      pm.makePersistent(goat);

    The Goat instance should now be in the same entity group as that User, right? I mean, there's no problem with leaving the User's primary key as just the raw String? Is there a performance benefit to using a Key, though? Should I update to:

      class User {
          /** Key type = unencoded string. */
          @PrimaryKey
          private Key key;
      }

      // Generate like:
      Key key = KeyFactory.createKey(
          User.class.getSimpleName(), "myusername");
      user.setKey(key);

    It's almost the same thing; I'd still just be generating the Key using the unique username anyway. Thanks

    Read the article

  • What collection object is appropriate for fixed ordering of values?

    - by makerofthings7
    Scenario: I am tracking several performance counters and have a CounterDescription[] that correlates to a DataSnapshot[], where CounterDescription[n] describes the data loaded within DataSnapshot[n]. I want to expose an easy-to-use API within C# that will allow for easy and efficient expansion of the arrays. For example:

      CounterDescription[0] = Humidity;  DataSnapshot[0] = .9;
      CounterDescription[1] = Temp;      DataSnapshot[1] = 63;

    My upload object is defined like this (note how the intent is to correlate many DataSnapshots with a DateTime reference, using the offset of the data to refer to its meaning; this was determined to be the most efficient way to store the data on the back end, and it is now reflected in the following structure):

      public class myDataObject
      {
          [DataMember]
          public SortedDictionary<DateTime, float[]> Pages { get; set; }

          /// <summary>
          /// An array that identifies what each position in the array is supposed to be
          /// </summary>
          [DataMember]
          public CounterDescription[] Counters { get; set; }
      }

    I will need to expand each of these arrays (float[] and CounterDescription[]), but whatever data already exists must stay at the same relative offset. Which .NET objects support this? I think Array, LinkedList<T>, and List<T> are able to keep the data fixed in the right locations. What do you think?
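
    As a small sketch, two parallel List<T>s keep existing offsets stable while still growing cheaply: Add() only appends, so index n in one list keeps describing index n in the other (the element types below are simplified stand-ins for CounterDescription and the snapshot values):

      var counters  = new List<string> { "Humidity", "Temp" };
      var snapshots = new List<float>  { 0.9f, 63f };

      counters.Add("Pressure");      // appended at index 2 ...
      snapshots.Add(1013.25f);       // ... indices 0 and 1 are untouched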

    Read the article

  • VB6 Game Development : Don't ask me why :-)

    - by CVS-2600Hertz-wordpress-com
    Hi All, I am developing a game in VB6 (please don't ask me why :)). The storyboard is ready and a rough implementation is underway. I am following a pure-software-rendering approach (i.e. no DirectX, no OpenGL, etc.). Amongst many others, the following serious problems exist:
    (1) 2D alpha transparency is required to implement overlays.
    (2) A parallax implementation is needed to give a depth-of-field illusion.
    (3) Mouse-scroll events must be captured globally (as in FPSes, mapping them to changing weapons).
    (4) Async sound playback with absolutely zero lag.
    Any ideas, anyone? Please suggest any well-documented library/OCX or sample code, ideally with maximum performance and as little overhead as possible. Also, anyone who has developed any games and would be open to sharing her/his code would be highly appreciated (are there any well-known VB games whose source code I can study?). Thank You

    Read the article

  • C++ AMP, for loops to parallel_for_each loop

    - by user1430335
    I'm converting an algorithm to make use of the massive acceleration that C++ AMP provides. The stage I'm at is putting the for loops into the well-known parallel_for_each loop. Normally this should be a straightforward task, but it appears more complex than I first thought. It's a nested loop which I increment in steps of 4 per iteration:

      for(int j = 0; j < height; j += 4, data += width * 4 * 4)
      {
          for(int i = 0; i < width; i += 4)
          {

    The trouble I'm having is the use of the index. I can't seem to find a way to fit this properly into the parallel_for_each loop. Using an index of rank 2 is the way to go, but manipulating it via branching will hurt the performance gain. I found a similar post, Controlling the index variables in C++ AMP, which also deals with index manipulation, but the increment aspect doesn't cover my issue. With kind regards, Forcecast
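
    A hedged sketch of one way to keep the stride without branching on the index: launch one thread per 4x4 block by shrinking the extent, then scale the index back up inside the kernel (data_view is a stand-in for however the pixel data is actually bound, and the body is a placeholder for the real per-block work):

      #include <amp.h>
      using namespace concurrency;

      void process_blocks(array_view<float, 2> data_view, int width, int height)
      {
          extent<2> blocks(height / 4, width / 4);            // one thread per 4x4 block
          parallel_for_each(blocks, [=](index<2> idx) restrict(amp)
          {
              int j = idx[0] * 4;                             // top-left row of the block
              int i = idx[1] * 4;                             // top-left column of the block
              data_view(j, i) += 1.0f;                        // placeholder for the real block work
          });
      }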

    Read the article

  • Custom ADO.NET provider to intercept and modify sql queries.

    - by Faisal
    Our client has an application that stores blobs in a database which has now grown enough to impact the performance of SQL Server. To overcome this issue, we are planning to offload all blobs to the file system and leave the path of the file in a new column in the user's table. For example, if the user has a table docs with columns id, name and content (blob), we would ask him to add a new column 'filepath' to this table. Our client is willing to make this change in the database, but when it comes to changing the SQL queries that read and write this table, they are not ready to accept it. In fact, they don't want any change that results in recompilation and deployment. So we are planning to write a custom ADO.NET provider that will: (1) intercept the select queries, (2) add a column 'filepath' at the end of the select statement, (3) retrieve the result set, and (4) modify the 'content' column value based on the 'filepath' value. Is there any use case that you think will certainly fail with this approach? I know this sounds dirty, but do we have a better way?

    Read the article

  • Domain model for an online WYSIWYG webpage generator / runtime

    - by CharlieBrown
    Hi all, I'm using C#, MVC, NHibernate and StructureMap as my IoC container, and I need some ideas regarding my domain model. The application I'm working on has two parts: an authoring part and a runtime part. The idea is to let the user create a webpage (mostly a form, actually) in authoring by choosing from a set of predefined controls. That webpage will later be used as a form in a call center environment (the runtime part), or in an intranet portal, etc. Basically something similar to what a CMS would do. The difference is, of course, that the webpage/form the author generates will be used and filled in at runtime, and that authors should be able to freely create the webpage they want, without limitations. I have a draft working model that allows a RunController to iterate over the ScriptPage (my class for the "generated webpage") Controls collection and uses partial views to render each of them. It works reasonably well. Basically I have a common ScriptControl class, and then I can create, for example, a TextInputControl or a DropDownControl by inheriting from that base class. I can also figure out the authoring part of the app, although that will surely be fun in itself. :) The biggest problem I have now is persistence. In order to be flexible, I want to be able to add more controls, and template controls (think of an Address composite control), in separate DLLs, so I think a relational model that handles every possible control is not the way to go. My current thinking is a kind of ObjectStore: binary-serializing the ScriptPage object that contains the List collection and deserializing it at runtime, but I'm not sure how well that will work with NHibernate and how good the performance will be. Serializing a small "page" with 10 controls results in 7964 bytes, for example. Any ideas out there? Thanks in advance, and excuse the length. ;)

    Read the article

  • Is there a set based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

      |ID|EmployeeID|Date     |Category       |Hours|
      |1 |1         |1/1/2010 |Vacation Earned|2.0  |
      |2 |2         |2/12/2010|Vacation Earned|3.0  |
      |3 |1         |2/4/2010 |Vacation Used  |1.0  |
      |4 |2         |5/18/2010|Vacation Earned|2.0  |
      |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are: the vacation balance is calculated as vacation earned minus vacation used, and vacation used is always applied against the oldest vacation earned amount first. We need to return the rows for vacation earned that have not been offset by vacation used. If vacation used has only offset part of a vacation earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:

      |ID|EmployeeID|Date     |Category       |Hours|
      |1 |1         |1/1/2010 |Vacation Earned|1.0  |
      |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, but records 1 and 4 were only partially used, so they were adjusted and returned as such. The only way we have thought of to do this is to get all of the vacation earned records into a temporary table, then get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting its value from the total vacation used until the total vacation used is zero (with a clean-up step for when the remaining vacation used covers only part of the oldest vacation earned record). This leaves just the outstanding vacation earned records. It works, but it is very inefficient and performs poorly, and performance will only degrade as more records are added. Are there any suggestions for a better solution, preferably set-based? If not, we'll just have to go with this.
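
    A hedged set-based sketch, assuming SQL Server 2012 or later for the windowed running total (on older versions the running total needs a correlated subquery) and an assumed table name of VacationLedger: compare each earned row's running total per employee against that employee's total hours used.

      SELECT e.ID,
             e.EmployeeID,
             e.[Date],
             e.Category,
             CASE WHEN e.RunningEarned - u.TotalUsed < e.Hours
                  THEN e.RunningEarned - u.TotalUsed
                  ELSE e.Hours
             END AS Hours
      FROM (
          SELECT ID, EmployeeID, [Date], Category, Hours,
                 SUM(Hours) OVER (PARTITION BY EmployeeID
                                  ORDER BY [Date], ID) AS RunningEarned
          FROM VacationLedger
          WHERE Category = 'Vacation Earned'
      ) AS e
      JOIN (
          SELECT EmployeeID,
                 SUM(CASE WHEN Category = 'Vacation Used' THEN Hours ELSE 0 END) AS TotalUsed
          FROM VacationLedger
          GROUP BY EmployeeID
      ) AS u ON u.EmployeeID = e.EmployeeID
      WHERE e.RunningEarned > u.TotalUsed;

    Against the sample data this returns rows 1 and 4 with 1.0 hour each, matching the expected result above.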

    Read the article

  • Best way to keep a .net client app updated with status of another application

    - by rwmnau
    I have a Windows service that runs all the time and takes some action every 15 minutes. I also have a client WinForms app that displays some information about what the service is doing. I'd like the forms application to keep itself updated with the recent status, but I'm not sure whether polling every second is a good move performance-wise. When it starts, my Windows service opens a WCF named pipe to receive queries (from my client form). Every second, a timer on the WinForm sends a query to the pipe and then displays the results; if the pipe isn't there, the form displays that the service isn't running. Is that the best way to do this? If my service opens the pipe when it starts, will it always stay open (until I close it or my service stops)? In addition to polling the service, maybe there's some way for the service to notify any watching applications of certain events, like starting and stopping processing? That way, I could poll less, since I'd presumably know about big events already and would only be polling for progress. Anything I'm missing?
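
    A hedged sketch of the "push" alternative: a duplex contract over the same named pipe (NetNamedPipeBinding supports duplex) lets the service call back into any subscribed client when processing starts or stops, so the form only needs to poll for progress. The names are invented.

      using System;
      using System.ServiceModel;

      [ServiceContract(CallbackContract = typeof(IStatusCallback))]
      public interface IStatusService
      {
          [OperationContract]
          void Subscribe();   // client registers; the service keeps the caller's callback channel
      }

      public interface IStatusCallback
      {
          [OperationContract(IsOneWay = true)]
          void ProcessingStarted(DateTime when);

          [OperationContract(IsOneWay = true)]
          void ProcessingFinished(DateTime when);
      }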

    Read the article

  • Using same log4j logger in standalone java application

    - by ktaylorjohn
    I have some code in a standalone Java application comprising 30+ classes. Most of these inherit from some common base classes. Each and every class has this method to get and use a log4j logger:

      public static Logger getLogger() {
          if (logger != null)
              return logger;
          try {
              PropertyUtil propUtil = PropertyUtil.getInstance("app-log.properties");
              if (propUtil != null && propUtil.getProperties() != null)
                  PropertyConfigurator.configure(propUtil.getProperties());
              logger = Logger.getLogger(ExtractData.class);
              return logger;
          } catch (Exception exception) {
              exception.printStackTrace();
          }
      }

    A) Should this be refactored into some common logger which is initialized once and used across all classes? Is that a better practice? B) If yes, how can this be done? How can I pass the logger around? C) This is actually used in the code not as Logger.debug() but as getLogger().debug(). What is the impact of this in terms of performance?
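
    For what it's worth, a sketch of the usual log4j arrangement: configure once at application startup, and give each class its own cheap static logger instead of re-reading the properties inside every getLogger() call.

      import org.apache.log4j.Logger;
      import org.apache.log4j.PropertyConfigurator;

      public class ExtractData {
          private static final Logger logger = Logger.getLogger(ExtractData.class);

          public static void main(String[] args) {
              // one-time configuration for the whole application
              PropertyConfigurator.configure("app-log.properties");
              logger.debug("starting extraction");
          }
      }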

    Read the article

  • What is better for a student programming in C++ to learn for writing GUIs: C# or Qt?

    - by flashnik
    I'm a teacher (instructor) of CS at a university. The course is based on Cormen and Knuth, and students program algorithms in C++. But sometimes it is good to show how an algorithm works, or just the result of a task, through a GUI. Also, in my opinion it's very important to be able to write full programs. They will have courses covering GUIs, but three years later, in fact just before graduation. I think they should be able to write simple GUI applications earlier, so I want to teach them that. What do you think is more useful for them to learn: programming a GUI with Qt, or writing the GUI in C# and calling an unmanaged C++ library? Update: for developing C++ applications students use MS Visual Studio, so C# is already installed, but AFAIK Qt can also be integrated into VS. I see the following pros for C# (some were suggested in answers here):
    (1) The need to make an additional layer. It's more work, but it forces you to explicitly specify the contract between the GUI and the data processing, so the border between GUI and algorithms becomes very clear.
    (2) It's more popular among employers, at least in Russia where we live. It's rather common to write performance-critical algorithms in C++ and P/Invoke them from a good-looking C# application or ASP.NET website. Maybe this is not so widespread in the rest of the world, but in Russia Windows is very popular, especially in companies and corporations, so most B2B applications are Windows applications.
    (3) Rapid development: it's much quicker to code in .NET than in C++, for many reasons.
    The con is that it's a new language, with its own specifics, for the students, plus the messiness of invoking calls to the library.

    Read the article

  • How to get a category listing from Magento?

    - by alex
    I want to create a page in Magento that shows a visual representation of the categories, for example:

      CATEGORY
        product 1
        product 2
      ANOTHER CATEGORY
        product 3

    My problem is that their database is organised very differently from what I've seen in the past. They have tables dedicated to data types like varchar, int, etc. I assume this is for performance or similar reasons. I haven't found a way to use MySQL to query the database and get a list of categories; I'd then like to match these categories to products, to get a listing of products for each category. Unfortunately Magento seems to make this very difficult. Also, I have not found a method that works from within a page block. I have created showcase.phtml, put it in the XML layout, and it displays and runs its PHP code. I was hoping for something easy like looping through $this->getAllCategories(); and then a nested loop inside with something like $category->getChildProducts(); Can anyone help me? Thank you!
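
    A heavily hedged sketch of the model-based route, as far as I recall the Magento 1.x API (the method names should be checked against your version; going straight at the EAV tables with raw MySQL is usually the harder path):

      $categories = Mage::getModel('catalog/category')
          ->getCollection()
          ->addAttributeToSelect('name');

      foreach ($categories as $category) {
          echo $category->getName() . "\n";

          $products = $category->getProductCollection()->addAttributeToSelect('name');
          foreach ($products as $product) {
              echo "  " . $product->getName() . "\n";
          }
      }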

    Read the article

  • CI + Joomla 1.5

    - by DMin
    Hi, this is something that I just cooked up with Joomla and CodeIgniter (CI). I wrote my database-intensive application in CodeIgniter, and the frontend is Joomla. I'm using Jumi (a Joomla extension) so I can include the CI files inside Joomla, basically to insert the content generated by CI into Joomla articles. The problem is that you can't include CI files directly from Joomla using Jumi, because CI tends to route the pages, so instead of seeing your Joomla page with the CI content you would be redirected to the CI page itself. I did a little workaround for this: I made an additional page that basically just cURLs the CI page, gets the data and echoes it out. From Jumi, I include this cURL page instead. A couple of questions: I've seen at least a few posts saying that CI + Joomla is difficult to do (link). (1) Do you see any glaring security issues or possible performance issues? (2) Do you know of a better way to implement this? (3) What do you think of this; is it a good way to do it? There is one component out there that plugs CI into Joomla, but it requires a fresh CI install, allows only one controller, and the download link is down as well.
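
    For reference, a minimal sketch of the bridge file included via Jumi (the CI URL is a placeholder):

      <?php
      $ch = curl_init('http://localhost/ci/index.php/reports/summary');
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the page instead of printing it
      curl_setopt($ch, CURLOPT_TIMEOUT, 10);
      $html = curl_exec($ch);
      curl_close($ch);

      echo ($html !== false) ? $html : 'Report temporarily unavailable.';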

    Read the article
