Search Results

Search found 13151 results on 527 pages for 'performance counters'.

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com and then, if that lookup failed, they would try example.com"? (I'm a dev, not a DNS specialist, and our IT support has refused to help with this.)

    Use DNS-based environment determination for your servers. Do this by initially splitting your top-level domain into a number of subdomains depending on their function, and then creating DNS service names in each of the subdomains pointing to the relevant server for that service. Based on the list above we would then have:

    * clientdb.prod.example.com for Production
    * clientdb.perf.example.com for Performance Testing
    * clientdb.qa.example.com for QA
    * clientdb.dev.example.com for Development

    Servers then resolve entries in their relevant subdomain by function. That is, all QA servers would first resolve entries in qa.example.com and then, if that lookup failed, they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that resolves correctly in all environments. This technique has the added advantage of still having global services defined in a common top-level domain.

    Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip

    This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?

    Also, cross-posted on ServerFault.
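
    For what it's worth (not part of the quoted material): what this scheme relies on client-side is the DNS suffix search list. On Windows that list can be pushed via Group Policy, or set per machine in the registry; a sketch, where the value name is the standard one and the domains come from the example above:

        Windows Registry Editor Version 5.00

        ; QA boxes: try qa.example.com first, then fall back to example.com
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
        "SearchList"="qa.example.com,example.com"

    With that in place, a lookup for the bare name clientdb tries clientdb.qa.example.com before clientdb.example.com.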

  • java time check API

    - by ring bearer
    I am sure this has been done 1000 times in 1000 different places. The question is: I want to know if there is a better/standard/faster way to check if the current "time" is between two time values given in hh:mm:ss format. For example, my big business logic should not run between 18:00:00 and 18:30:00. So here is what I had in mind:

        public static boolean isCurrentTimeBetween(String starthhmmss, String endhhmmss) throws ParseException {
            DateFormat hhmmssFormat = new SimpleDateFormat("yyyyMMddHH:mm:ss");
            Date now = new Date();
            String yyyyMMdd = hhmmssFormat.format(now).substring(0, 8);
            return hhmmssFormat.parse(yyyyMMdd + starthhmmss).before(now)
                && hhmmssFormat.parse(yyyyMMdd + endhhmmss).after(now);
        }

    Example test case:

        String doNotRunBetween = "18:00:00,18:30:00"; // read from props file
        String[] hhmmss = doNotRunBetween.split(",");
        if (isCurrentTimeBetween(hhmmss[0], hhmmss[1])) {
            System.out.println("NOT OK TO RUN");
        } else {
            System.out.println("OK TO RUN");
        }

    What I am looking for is code that is better:

    * in performance
    * in looks
    * in correctness

    What I am not looking for:

    * third-party libraries
    * exception handling debate
    * variable naming conventions
    * method modifier issues
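
    The java.time API is newer than this question, but where Java 8+ is available, a minimal sketch of the same check (same exclusive-bounds semantics, assuming start is before end on the same day):

        import java.time.LocalTime;

        public class TimeWindow {
            // True if now is strictly between start and end (same-day window).
            public static boolean isCurrentTimeBetween(String startHHmmss, String endHHmmss) {
                LocalTime now = LocalTime.now();
                return now.isAfter(LocalTime.parse(startHHmmss))
                    && now.isBefore(LocalTime.parse(endHHmmss));
            }

            public static void main(String[] args) {
                String doNotRunBetween = "18:00:00,18:30:00"; // read from props file
                String[] hhmmss = doNotRunBetween.split(",");
                System.out.println(isCurrentTimeBetween(hhmmss[0], hhmmss[1])
                        ? "NOT OK TO RUN" : "OK TO RUN");
            }
        }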

  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing storage and performance optimization), the consequence for me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku.

    What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is:

        git push -f heroku local-topic-branch:refs/heads/master

    What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know!

    The caveat for this, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like GitHub) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku, so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary.

    Can anybody tell me if this would work?

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            push = +refs/heads/*:refs/heads/master

    I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like GitHub, for backup and cloning of all my work.

    Footnote: This question is similar to, but not quite the same as, http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku
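
    One note, untested here: a wildcard refspec like +refs/heads/*:refs/heads/master maps every local branch onto a single destination, and git will likely reject the push as soon as more than one local branch exists. A variant sometimes suggested instead uses HEAD as the source, so whatever branch is currently checked out is what gets pushed:

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            push = +HEAD:refs/heads/master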

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key:

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] IDENTITY(1,1) NOT NULL,
            [CustomerName] [varchar](100) NOT NULL,
            [Deleted] [bit] NOT NULL,
            [Active] [bit] NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerId] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    This is the query I'll be using to see what the execution plan shows:

        SELECT CustomerName FROM Customers

    Executing this command with no additional nonclustered index, the execution plan shows me: I/O cost = 3.45646, operator cost = 4.57715. Now I'm trying to see if it's possible to improve performance, so I've created a nonclustered index for this table.

    1) First nonclustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    Executing the select against the Customers table again, the execution plan shows me: I/O cost = 2.79942, operator cost = 3.92001. That seems better. Now I've deleted this just-created nonclustered index in order to create a new one.

    2) Second nonclustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC
        )
        INCLUDE ([CustomerName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this new nonclustered index, I've executed the select statement again and the execution plan shows me the same result: I/O cost = 2.79942, operator cost = 3.92001.

    So, which nonclustered index should I use? Why are the costs the same in the execution plan for I/O and operator? Am I doing something wrong, or is this expected? Thank you.
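
    For comparison, a sketch of a third option (an assumption about the workload, not from the original post): every nonclustered index row already carries the clustered key (CustomerId), which is also why the two indexes above cost exactly the same, their leaf contents are identical. Since the query only touches CustomerName, a single-column index covers it just as well:

        CREATE NONCLUSTERED INDEX IX_Customers_CustomerName
            ON [dbo].[Customers] ([CustomerName] ASC);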

  • Self-referencing tables in Linq2Sql

    - by J-Man
    Hi, I've seen a lot of questions on self-referencing tables in Linq2Sql and how to eagerly load all child records for a particular root object. I've implemented a temporary solution by accessing all underlying properties, but you can see that this doesn't do the performance any good.

    The thing is, though, that all records are correlated with each other using a correlation GUID. Example below:

        RootElement   - Id: 1 - ParentId: null - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement1 - Id: 2 - ParentId: 1    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement2 - Id: 3 - ParentId: 2    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement1 - Id: 4 - ParentId: 2    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD

    In my case, I do have access to the correlationId, so I can retrieve all of my records by performing the following query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
        select element;

    But, of course, I want these elements associated with each other by executing this query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
            && element.ParentId == null
        select element;

    My question is: is it possible to use the results of the first query as some sort of 'caching mechanism' for the query where I get the root element?

    Thanks for the input. J.
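
    A minimal client-side sketch of that idea (entity and property names are assumed from the example above; this is one query plus in-memory wiring, not Linq2Sql's own eager loading):

        var correlationId = new Guid("4D68E512-4B55-44f4-BA5A-174B630A03DD");

        // One round trip: pull the whole correlated set into memory.
        var all = db.Elements
                    .Where(e => e.CorrelationId == correlationId)
                    .ToList();

        // Index children by ParentId so the tree can be walked without new queries.
        var byParent = all.ToLookup(e => e.ParentId);

        var roots = byParent[null];              // ParentId == null -> root elements
        foreach (var root in roots)
        {
            var children = byParent[root.Id];    // resolved in memory, no DB hit
        }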

  • Autorotation and multiple view controllers

    - by alku83
    I'm creating an iPad application which should only work in portrait and portrait-upside-down modes. For performance reasons, in my applicationDidFinishLaunching method I'm creating several view controllers and adding their views to my main window as subviews. I then hide the ones I don't want to see straight away. There is no tab bar or navigation controller.

    My problem is that only the first view controller seems to be receiving the rotate calls. I have verified this by swapping around the order in which I add the subviews to the main window, and with NSLogs. Is there some way I can force all the controllers to receive the calls?

    Some of my views are designed to lay over the top of another view, but this behind view will not always be the same one - so it seems to make sense to have the overlay view in a separate view controller. Am I doing something fundamentally wrong, and is that why it's not behaving as I would expect?

    EDIT: The accepted answer for this question seems to indicate the exact problem I'm facing: http://stackoverflow.com/questions/548142/uiviewcontroller-rotate-methods
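
    A sketch of the usual workaround (the method name is UIKit's rotation callback of that era; childControllers is a hypothetical array you would maintain yourself): let the one controller that does receive the callbacks forward them to the others:

        // In the view controller that actually receives the rotation callbacks.
        - (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toOrientation
                                        duration:(NSTimeInterval)duration
        {
            [super willRotateToInterfaceOrientation:toOrientation duration:duration];
            // Hypothetical: an NSArray of the other controllers added to the window.
            for (UIViewController *controller in self.childControllers) {
                [controller willRotateToInterfaceOrientation:toOrientation duration:duration];
            }
        }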

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client-server application design. We have this browser-based management application where many users use the system, so obviously we have a user management module within it.

    I have always thought having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, and that otherwise the system is poorly designed. What he meant was: if a user wants to use the application, then a user should be created in the user table AND in the database server as a user account as well. So if I have 50 users using my application, then I should have 50 database server user logins.

    I personally think having just one user account in the database server for this database is enough. Just grant this user the privileges needed for all the operations the application performs. The users that interact with the application should have their accounts created and managed within the database table, as they are more related to the application layer. I don't see, or agree, that there is a need to create a database server user account for every user created for the application in the user table. A single database server user should be enough to handle all the queries sent by the application.

    I really hope to hear some suggestions / opinions, and whether I'm missing something - performance or security issues? Thank you very much.

  • Struggling with a data modeling problem

    - by rpat
    I am struggling with a data model (I use MySQL for the database). I am uneasy about what I have come up with. If someone could suggest a better approach, or point me to some reference matter, I would appreciate it.

    The data would have organizations of many types. I am trying to do a 3-level classification (Class, Category, Type). Say if I have 'Italian Restaurant', it will have the following classification:

        Food Services > Restaurants > Italian

    However, an organization may belong to multiple groups. A restaurant may also serve Chinese and Italian, so it will fit into 2 classifications:

        Food Services > Restaurants > Italian
        Food Services > Restaurants > Chinese

    The classification reference tables would be like the following:

        ORG_CLASS (RowId, ClassCode, ClassName)
            1, FOOD, Food Services

        ORG_CATEGORY (RowId, ClassCode, CategoryCode, CategoryName)
            1, FOOD, REST, Restaurants

        ORG_TYPE (RowId, ClassCode, CategoryCode, TypeCode, TypeName)
            100, FOOD, REST, ITAL, Italian
            101, FOOD, REST, CHIN, Chinese
            102, FOOD, REST, SPAN, Spanish
            103, FOOD, REST, MEXI, Mexican
            104, FOOD, REST, FREN, French
            105, FOOD, REST, MIDL, Middle Eastern

    The actual data tables would be like the following. I will allow an organization a max of 3 classifications: 3 GroupIds, each pointing to a row in ORG_TYPE. So I have my ORGANIZATION_TABLE:

        ORGANIZATION_TABLE (OrgGroupId1, OrgGroupId2, OrgGroupId3, OrgName, OrgAddress)
            100, 103, NULL, MyRestaurant1, MyAddr1
            100, 102, NULL, MyRestaurant2, MyAddr2
            100, 104, 105,  MyRestaurant3, MyAddr3

    During data add, a dialog could let the user choose the class, category, and type, and the corresponding GroupId could be populated with the RowId from the ORG_TYPE table.

    During search, if all three classification levels are chosen, it will be more specific. For example, if "Food Services > Restaurants > Italian" is the criteria, the where clause would be 'where OrgGroupId1 = 100'. If only 2 levels are chosen ("Food Services > Restaurants"), I have to do 'where OrgGroupId1 in (100, 101, 102, 103, 104, 105, ...)' - there could be a hundred in that list. I will disallow class-level search; that is, I will force selection of a class and category.

    The Ids would be integers. I am trying to see performance issues and other issues. Overall, would this work? Or do I need to throw this out and start from scratch?
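
    For comparison, a sketch of the junction-table alternative (table names are illustrative): it removes the fixed three-column limit and turns the long IN list into a join:

        CREATE TABLE ORGANIZATION (
            OrgId      INT PRIMARY KEY,
            OrgName    VARCHAR(100),
            OrgAddress VARCHAR(200)
        );

        -- One row per classification an organization holds.
        CREATE TABLE ORG_CLASSIFICATION (
            OrgId     INT,
            OrgTypeId INT,   -- references ORG_TYPE.RowId
            PRIMARY KEY (OrgId, OrgTypeId)
        );

        -- "Food Services > Restaurants", any type:
        SELECT o.*
        FROM ORGANIZATION o
        JOIN ORG_CLASSIFICATION oc ON oc.OrgId = o.OrgId
        JOIN ORG_TYPE t           ON t.RowId = oc.OrgTypeId
        WHERE t.ClassCode = 'FOOD' AND t.CategoryCode = 'REST';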

  • CPU friendly infinite loop

    - by Adi
    Writing an infinite loop is simple:

        while (true) {
            // add whatever break condition here
        }

    But this will trash the CPU performance. This execution thread will take as much as possible of the CPU's power. What is the best way to lower the impact on the CPU? Adding some Thread.Sleep(n) should do the trick, but setting a high timeout value for the Sleep() method may indicate an unresponsive application to the operating system.

    Let's say I need to perform a task each minute or so in a console app. I need to keep Main() running in an "infinite loop" while a timer fires the event that does the job. I would like to keep Main() with the lowest impact on the CPU. What methods do you suggest? Sleep() can be OK but, as I already mentioned, it might indicate an unresponsive thread to the operating system.

    LATER EDIT: I want to explain better what I am looking for. I need a console app, not a Windows service. Console apps can simulate Windows services on Windows Mobile 6.x systems with the Compact Framework. I need a way to keep the app alive as long as the Windows Mobile device is running. We all know that a console app runs as long as its static Main() function runs, so I need a way to prevent the Main() function from exiting. In special situations (like updating the app), I need to request the app to stop, so I need to loop infinitely and test for some exit condition. For example, this is why Console.ReadLine() is of no use to me: there is no exit condition check.

    Regarding the above, I still want the Main() function as resource-friendly as possible. Leave aside the footprint of the function that checks for the exit condition.
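
    A sketch of one common pattern (assuming .NET as in the question; the same types exist on the Compact Framework): block Main() on a wait handle instead of spinning. A blocked WaitOne() consumes no CPU and never looks like a busy loop; if the exit condition must be polled, WaitOne(timeout) in a loop is the tunable compromise:

        using System;
        using System.Threading;

        class Program
        {
            // Signaled by whatever detects the exit/update condition.
            static readonly ManualResetEvent exitRequested = new ManualResetEvent(false);

            static void Main()
            {
                // Fires the per-minute job on a pool thread; Main stays idle.
                using (var timer = new Timer(state => DoWork(), null, 0, 60 * 1000))
                {
                    // Blocks without burning CPU until RequestExit() is called.
                    exitRequested.WaitOne();
                }
            }

            static void DoWork()
            {
                // the task to run each minute
            }

            public static void RequestExit()
            {
                exitRequested.Set();
            }
        }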

  • Executing remote script - Architecture

    - by Bruno Lee
    Hi, I want to make an application that executes a remote script. The user can create a script (probably a Lua script), then store it on the server. Then he can use an API to execute the script. I was thinking the API could be a webservice. So my questions are:

    1. I need high performance to execute the script, so my first choice was a Lua script. Does someone have another suggestion?
    2. Because I need high performance, I was wondering if a webservice is the best solution. Maybe I could create a TCP/IP Windows service that holds the users' requests. It is important to say that I will have many users executing scripts at the same time, so I will have a concurrency problem.
    3. My scripts will query a database. I will use Tokyo Cabinet or Tokyo Tyrant. I think Tokyo Tyrant is the only solution, because I will have many requests. For performance, do I need to set up connection pooling? Is there any way to share variables between webservice requests?
    4. To make the webservice or the Windows service, I was thinking of using C++.

    Can someone help with these questions? Thanks

  • Python Django Global Variables

    - by Joe J
    Hi all, I'm looking for a simple but recommended way in Django to store a variable in memory only - when Apache restarts or the Django development server restarts, the variable is reset back to 0. More specifically, I want to count how many times a particular action takes place on each model instance (database record), but for performance reasons, I don't want to store these counts in the database. I don't care if the counts disappear after a server restart. But as long as the server is up, I want these counts to be consistent between the Django shell and the web interface, and I want to be able to return how many times the action has taken place on each model instance.

    I don't want the variables to be associated with a user or session, because I might want to return these counts without being logged in (and I want the counts to be consistent no matter what user is logged in).

    Am I describing a global variable? If so, how do I use one in Django? I've noticed that files like urls.py, settings.py and models.py seem to be parsed only once per server startup (by contrast with views.py, which seems to be parsed each time a request is made). Does this mean I should declare my variables in one of those files? Or should I store it in a model attribute somehow (as long as it sticks around for as long as the server is running)?

    This is probably an easy question, but I'm just not sure how it's done in Django. Any comments or advice are much appreciated. Thanks, Joe
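
    A sketch of the module-level approach, with the big caveat that each OS process gets its own copy (a separate shell process, or multiple Apache workers, will each see their own counts); the lock is there because Django may serve requests on threads:

        # counters.py -- import this module anywhere; Python caches modules,
        # so every importer in the same process shares this state.
        import threading
        from collections import defaultdict

        _lock = threading.Lock()
        _counts = defaultdict(int)  # keyed by (model_name, instance_pk)

        def increment(model_name, pk):
            with _lock:
                _counts[(model_name, pk)] += 1
                return _counts[(model_name, pk)]

        def get_count(model_name, pk):
            with _lock:
                return _counts[(model_name, pk)]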

  • When optimizing database queries, what exactly is the relationship between number of queries and size of queries?

    - by williamjones
    To optimize application speed, everyone always advises minimizing the number of queries an application makes to the database, consolidating them into fewer queries that retrieve more wherever possible. However, this always comes with the caution that data transferred is still data transferred, and just because you are making fewer queries doesn't make the data transferred free.

    I'm in a situation where I can over-include on the query in order to cut down the number of queries, and simply remove the unwanted data in the application code. Is there any type of rule of thumb for how much each query costs, to know when to optimize number of queries versus size of queries? I've tried to Google for objective performance analysis data, but surprisingly haven't been able to find anything like that.

    Clearly this relationship will change with factors such as the database growing in size, making this somewhat individualized, but surely it is not so individualized that a broad sense of the landscape can't be drawn out? I'm looking for general answers, but for what it's worth, I'm running an application on Heroku.com, which means Ruby on Rails with a Postgres database.

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus]

    I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards, in this case). Most of the time, the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single processor to perform the whole process, and typically only processor 0.

    Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.)

    My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR(MAX)
        AS
        SET NOCOUNT ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VARCHAR(MAX);
        DECLARE @decryptedCardData VARCHAR(MAX);
        DECLARE @tmpTable AS TABLE
        (
            CardId uniqueidentifier,
            DecryptedCard NVARCHAR(MAX)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            SELECT cardId FROM CreditCards
            WHERE companyId = @companyId AND Active = 1
            ORDER BY addedBy DESC

        OPEN creditCards
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                SELECT CONVERT(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                WHERE cardid = @cardId
                ORDER BY valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                PRINT 'CreditCardData'
                PRINT @tmpdecryptedCardData
                SET @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                PRINT '@decryptedCardData'
                PRINT @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            INSERT INTO @tmpTable (CardId, DecryptedCard)
            VALUES (@cardId, @decryptedCardData)

            SET @decryptedCardData = ''
            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        SELECT CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
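
    Not a direct answer on splitting across CPUs, but worth noting: the nested cursors are usually the real cost here, and a set-based rewrite gives SQL Server a single statement it can parallelize on its own. A sketch under the same schema assumptions, using the FOR XML PATH('') idiom to concatenate the decrypted pieces in valueOrder:

        SELECT c.cardId,
               (SELECT CONVERT(NVARCHAR(MAX),
                               DecryptByCert(Cert_Id('Oh-Nay-Nay'),
                                             d.EncryptedCard, @DecryptionKey))
                FROM CreditCardData d
                WHERE d.cardid = c.cardId
                ORDER BY d.valueOrder
                FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)') AS DecryptedCard
        FROM CreditCards c
        WHERE c.companyId = @companyId
          AND c.Active = 1
        ORDER BY c.addedBy DESC;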

  • Why can't I use accented characters next to a word boundary?

    - by Rexxars
    I'm trying to make a dynamic regex that matches a person's name. It works without problems on most names, until I ran into accented characters at the end of the name. Example: Some Fancy Namé. The regex I've used so far is:

        /\b(Fancy Namé|Namé)\b/i

    Used like this:

        "Goal: Some Fancy Namé. Awesome.".replace(/\b(Fancy Namé|Namé)\b/i, '<a href="#">$1</a>');

    This simply won't match. If I replace the é with an e, it matches just fine. If I try to match a name such as "Some Fancy Naméa", it works just fine. If I remove the last word boundary anchor, it works just fine. Why doesn't the word boundary flag work here? Any suggestions on how I would get around this problem?

    I have considered using something like this, but I'm not sure what the performance penalties would be like:

        "Some fancy namé. Allow me to ellaborate.".replace(/([\s.,!?])(fancy namé|namé)([\s.,!?]|$)/g, '$1<a href="#">$2</a>$3')

    Suggestions? Ideas?
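
    The likely cause, for context (this reasoning is mine, not from the post): JavaScript's \b is defined in terms of \w, which only covers [A-Za-z0-9_], so é counts as a non-word character and there is no word/non-word transition between é and a following space or period. A sketch of the group-based workaround, using a lookahead so the trailing delimiter is not consumed:

        // \b after é never matches in JS, so anchor on delimiters instead.
        var text = "Goal: Some Fancy Namé. Awesome.";
        var linked = text.replace(
            /(^|[\s.,!?])(Fancy Namé|Namé)(?=[\s.,!?]|$)/gi,
            '$1<a href="#">$2</a>'
        );
        // linked === 'Goal: Some <a href="#">Fancy Namé</a>. Awesome.'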

  • Telerik vs. Infragistics for Silverlight

    - by JeffN825
    Yes, this is certainly a duplicate question, but I wanted to get some fresh takes. My impression is that Telerik is a much more complete suite, but I'm really turned off by the responsiveness of their controls. They just seem "clunky" (I have a very fast computer and video card): scrolling in a grid and transitions stutter, even in their latest demos where they claim to have good performance. I do like that their WPF suite matches their SL one in terms of API.

    Infragistics has fewer controls and fewer theming possibilities, but their controls are very responsive. Scrolling in a grid is fluid, as are their combo menus and all the other controls.

    I checked out ComponentOne, and their controls seem analogous to Telerik's in terms of the points mentioned above, but are a little less "pretty".

    Any thoughts from other users of these suites? Basically, what I'm looking for is a suite that will be highly performant and responsive, relatively customizable from a theming standpoint, and have enough functionality to develop a LOB SL application without having to use multiple suites to satisfy the majority of common requirements.

  • Using Excel as front end to Access database (with VBA)

    - by Alex
    I am building a small application for a friend and they'd like to be able to use Excel as the front end. (The UI will basically be userforms in Excel.) They have a bunch of data in Excel that they would like to be able to query, but I do not want to use Excel as a database, as I don't think it is fit for that purpose, and am considering using Access. [BTW, I know Access has its shortcomings, but there is zero budget available and Access is already on my friend's PC.]

    To summarise, I am considering dumping a bunch of data into Access and then using Excel as a front end to query the database and display results in a userform-style environment. Questions:

    1. How easy is it to link to Access from Excel using ADO / DAO? Is it quite limited in terms of functionality, or can I get creative?
    2. Do I pay a performance penalty (vs. using forms in Access as the UI)?
    3. Assuming that the database will always be updated using ADO / DAO commands from within Excel VBA, does that mean I can have multiple Excel users using that one single Access database and not run into any concurrency issues, etc.?
    4. Any other things I should be aware of?

    I have strong Excel VBA skills and think I can pick up Access VBA quite quickly, but I've never really done an Excel / Access link before. I could shoehorn the data into Excel and use it as a quasi-database, but that just seems like more pain than it is worth (and not a robust long-term solution). Any advice appreciated. Alex
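
    On question 1, a minimal VBA sketch of the ADO route (assumes a reference to the Microsoft ActiveX Data Objects library; the .accdb path and table name are illustrative):

        Sub QueryAccessDb()
            Dim cn As ADODB.Connection, rs As ADODB.Recordset
            Set cn = New ADODB.Connection
            ' ACE is the Access 2007+ engine; older .mdb files use Jet 4.0 instead.
            cn.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\MyApp.accdb;"
            Set rs = New ADODB.Recordset
            rs.Open "SELECT * FROM Customers", cn, adOpenStatic, adLockReadOnly
            ' Dump the results straight into the sheet behind a userform.
            If Not rs.EOF Then Sheet1.Range("A1").CopyFromRecordset rs
            rs.Close
            cn.Close
        End Sub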

  • Extending EF4 SQL Generation

    - by Basiclife
    Hi, we're using EF4 in a fairly large system and occasionally run into problems due to EF4 being unable to convert certain expressions into SQL. At present, we either need to do some fancy footwork (DB/code) or just accept the performance hit and allow the query to be executed in memory. Needless to say, neither of these is ideal, and the hacks we've sometimes had to use reduce readability / maintainability.

    What we would ideally like is a way to extend the SQL generation capabilities of the EF4 SQL provider. Obviously there are some things, like .NET method calls, which will always have to be client-side, but some functionality, like date comparisons (e.g. Group by weeks in Linq to Entities), should be doable.

    I've Googled, but perhaps I'm using the wrong terminology, as all I get is information about the new features of EF4 SQL generation. For such a flexible and extensible framework, I'd be surprised if this isn't possible. In my head, I'm imagining inheriting from the [SQL 2008] provider and extending it to handle additional expressions / similar in the expression tree it's given to convert to SQL.

    Any help/pointers appreciated. We're using VS2010 Ultimate, .NET 4 (non-client profile) and EF4. The app is in ASP.NET and is running in a 64-bit environment, in case it makes a difference.
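
    Before writing a custom provider, it may be worth checking whether the canonical functions already cover the case. For date comparisons specifically, EF4 ships System.Data.Objects.EntityFunctions, whose methods translate to SQL; a sketch, assuming a hypothetical Orders set with a CreatedOn datetime column:

        using System.Data.Objects;   // EntityFunctions (EF4 / .NET 4)

        var byDay = from o in context.Orders
                    group o by EntityFunctions.TruncateTime(o.CreatedOn) into g
                    select new { Day = g.Key, Count = g.Count() };

    For provider-specific functions there is also System.Data.Objects.SqlClient.SqlFunctions, and the EdmFunction attribute lets you map model-defined functions of your own; fully custom SQL generation beyond that generally does mean writing or wrapping a provider.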

  • how to manage a "resource" array efficiently

    - by Haiyuan Zhang
    The scenario of my question is that one needs to use a fixed-size array to keep track of a certain number of "objects". The object here can be as simple as an integer or as complex as a very fancy data structure. And "keep track" here means to allocate one object when another part of the app needs an instance, and to recycle it for future allocation when that instance is returned. Finally, let me use C++ to put my problem in a more descriptive way:

        #define MAX 65535 /* 65535 just indicates that many items should be handled. Performance demanding! */

        typedef struct {
            int item;
        } Item_t;

        Item_t items[MAX];

        class itemManager {
        private:
            /* up to you.... */
        public:
            int get();            /* get one index to a free Item_t in items */
            bool put(int index);  /* recycle the Item_t indicated by an index into items */
        };

    How would you implement the two public functions of itemManager? It's up to you to add any private members.
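
    A sketch of one classic approach (a LIFO free list of indices: both get() and put() are O(1), with no searching and no per-allocation heap work; uses the MAX macro from above):

        #include <vector>

        class itemManager {
        private:
            std::vector<int>  freeList;   /* indices ready to hand out */
            std::vector<bool> inUse;      /* guards against double-put */
        public:
            itemManager() : inUse(MAX, false) {
                freeList.reserve(MAX);
                for (int i = MAX - 1; i >= 0; --i)
                    freeList.push_back(i);         /* hand out index 0 first */
            }

            int get() {
                if (freeList.empty()) return -1;   /* -1 signals exhaustion */
                int idx = freeList.back();
                freeList.pop_back();
                inUse[idx] = true;
                return idx;
            }

            bool put(int index) {
                if (index < 0 || index >= MAX || !inUse[index]) return false;
                inUse[index] = false;
                freeList.push_back(index);
                return true;
            }
        };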

  • Autocomplete and IE7 - slowness, sluggishness as overall pagesize grows?

    - by wchrisjohnson
    Hi, I have the autocomplete plugin (http://bassistance.de/jquery-plugins/jquery-plugin-autocomplete/, versions 1.1 and 1.0.2) on a project to add pieces of "equipment" to a "project". On a fresh project the plugin works great: the data returned from the database comes back fast, you can scroll the list quickly, and you can select an item and move on to the next one.

    Once I have a project established with equipment on it and I go to add equipment, the performance is pretty bad. It takes 4-5 seconds to get the list of data back from the server, scrolling the list is painful, and the cursor takes several seconds to settle on an item. Repainting the page after the list goes away is slow. This is occurring in IE7, latest version. FF3 and Chrome are fine - very snappy. The page size is about 40K overall.

    I'm thinking this is an issue with the IE7 JavaScript engine, or an edge case with this plugin and IE7; it works quickly enough in FF3+. I would appreciate any ideas, solutions, known issues, or thoughts on how to more specifically pin this down. I'd love to post sample code, but this is a corporate app, and I'm not sure how useful it would be given that the server-side piece cannot be shown, i.e. you can't pull it down and test it like a self-contained piece of code.

    Thanks in advance! Chris

  • What are the drawbacks of using PHP to create variables in my CSS stylesheet?

    - by Greg
    One significant drawback of CSS is that one can't use variables. For example, I'd like to use variables to control the location of imported CSS, and it would be awesome to create variables for colors that are used repeatedly in a design.

    One approach is to use a PHP file for the CSS stylesheet. In other words, create a "style.php" with...

        <?php header("Content-type: text/css"); ?>

    ...at the top of the file, and then link to it using...

        <link href="style.php" rel="stylesheet" type="text/css" />

    ...in any file that uses these styles.

    So what's the catch? I think it might be performance - I did a few quick experiments in Firefox/Firebug and, as one would expect, the CSS stylesheet is cached, but the PHP stylesheet isn't. So we're paying the price of an additional GET. The other annoying thing is that TextMate does not syntax-highlight the CSS properly in a .php file.

    Are there other drawbacks? Have you used this approach, and if so, would you recommend it?
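
    One mitigation for the caching drawback (a sketch; the one-day lifetime and the $brandColor variable are arbitrary choices for illustration) is to emit explicit cache headers from the PHP stylesheet so browsers can reuse it:

        <?php
        header("Content-type: text/css");
        // Allow browsers/proxies to cache the generated CSS for a day.
        header("Cache-Control: public, max-age=86400");
        header("Expires: " . gmdate("D, d M Y H:i:s", time() + 86400) . " GMT");

        $brandColor = "#336699";  // hypothetical shared variable
        ?>
        a:link, a:visited { color: <?php echo $brandColor; ?>; }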

  • What is best practice (and implications) for packaging projects into JARs?

    - by user245510
    What is considered best practice when deciding how to define the set of JARs for a project (for example a Swing GUI)? There are many possible groupings:

    * JAR per layer (presentation, business, data)
    * JAR per (significant?) GUI panel. For a significant system, this results in a large number of JARs, but the JARs are (or should be) more reusable - fine-grained granularity.
    * JAR per "project" (in the sense of an IDE project); "common.jar", "resources.jar", "gui.jar", etc.

    I am an experienced developer; I know the mechanics of creating JARs, I'm just looking for wisdom on best practice. Personally, I like the idea of a JAR per component (e.g. a panel), as I am mad-keen on encapsulation, and the holy grail of reuse across projects. I am concerned, however, that on a practical performance level, the JVM would struggle class-loading over dozens, maybe hundreds, of small JARs. Each JAR would contain the GUI panel code and the necessary resources (i.e. not centralised), so each panel can stand alone.

    Does anyone have wisdom to share?

  • Design suggestions for creating document management structure using hidden shares.

    - by focus.nz
    I need to add some document management functionality to my software. Documents will be grouped by company name and project name. The folders need to be accessed by the application using the ID numbers of clients/projects, but also easily browsed by the end user using Windows Explorer. Clients and projects will be stored in a database.

    I am thinking of having the software create the folders using the friendly name, and then using a hidden share with the ID number for the software to access the files. The folder structure would be something like this:

        Company 1 (Company-1234$)
            Project 101 (Project-101$)
            Project 102 (Project-102$)
            Project 103 (Project-103$)
        Company 2 (Company-5678$)
            Project 201 (Project-201$)
            Project 202 (Project-202$)
            Project 203 (Project-203$)

    So in the example above, there would be a company called "Company 1" with an ID of "1234". When browsing the folders using Windows Explorer, the user would see \\ServerName\Documents\Company1, and you could also access the same folder from \\ServerName\Documents\Company-1234$.

    By using the hidden share, if the company name changes or the folder is renamed for some reason, it doesn't break the link in the application, because the application uses the hidden share based on the ID, which never changes.

    Will having hundreds (maybe thousands) of hidden shares on a server cause a huge performance hit? Does anyone have any suggestions or alternatives to provide this feature?

  • How to build a Graceful Degradation AJAX web page?

    - by Morgan Cheng
    I want to build a web page with "graceful degradation". That is, the web page functions even if JavaScript is disabled. Now I have to make a design decision on the format of the AJAX response.

    If JavaScript is disabled, each HTTP request to the server generates HTML as the response. The browser refreshes with the returned HTML. That's fine.

    If JavaScript is enabled, each AJAX HTTP request to the server generates... well, JSON or HTML. If it is HTML, it is easy to implement: just have JavaScript replace part of the page with the returned HTML. And, on the server side, not much code change is needed. If it is JSON, then I have to implement JSON-to-HTML logic in JavaScript again, which is almost a duplicate of the server-side logic. Duplication is evil. I really don't like it. The benefit is that the bandwidth usage is better than HTML, which brings better performance.

    So, what's the best solution for graceful degradation? Is it better for the AJAX request to return JSON or HTML?

  • How would I go about sharing variables in a class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on trying to implement Lua scripting for logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua.

    The way the engine works now, each object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables; Lua is exceedingly flexible about this, so I don't foresee any issues.

    Anyway, currently the game engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with.

    So far, I'm keeping two separate tables of objects in place: Lua spawns a new game object, which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring.

    What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables.

    What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using Luabind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?
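
    A minimal sketch of the metatable route (Lua 5.1 C API; the GameObject struct and its fields are illustrative stand-ins for the engine's real variables): wrap a pointer to the C-side object in a userdata whose __index/__newindex metamethods read and write the C fields directly, so nothing has to be registered per variable:

        #include <lua.h>
        #include <lauxlib.h>
        #include <string.h>

        typedef struct { float x, y; } GameObject;  /* illustrative engine struct */

        static GameObject *check_obj(lua_State *L) {
            return *(GameObject **)luaL_checkudata(L, 1, "GameObject");
        }

        static int obj_index(lua_State *L) {
            GameObject *o = check_obj(L);
            const char *key = luaL_checkstring(L, 2);
            if (strcmp(key, "x") == 0)      lua_pushnumber(L, o->x);
            else if (strcmp(key, "y") == 0) lua_pushnumber(L, o->y);
            else                            lua_pushnil(L);
            return 1;
        }

        static int obj_newindex(lua_State *L) {
            GameObject *o = check_obj(L);
            const char *key = luaL_checkstring(L, 2);
            lua_Number v = luaL_checknumber(L, 3);
            if (strcmp(key, "x") == 0)      o->x = (float)v;
            else if (strcmp(key, "y") == 0) o->y = (float)v;
            else return luaL_error(L, "no such field: %s", key);
            return 0;
        }

        /* Wrap an engine-owned object and leave the wrapper on the Lua stack. */
        void push_gameobject(lua_State *L, GameObject *obj) {
            GameObject **ud = (GameObject **)lua_newuserdata(L, sizeof *ud);
            *ud = obj;
            if (luaL_newmetatable(L, "GameObject")) {  /* created only once */
                lua_pushcfunction(L, obj_index);
                lua_setfield(L, -2, "__index");
                lua_pushcfunction(L, obj_newindex);
                lua_setfield(L, -2, "__newindex");
            }
            lua_setmetatable(L, -2);
        }

    From Lua, obj.x = 5 then writes straight into the C struct; only the metatable is registered, not each variable.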

  • encrypt apache and mysql servers

    - by stormdrain
    I have a question about encrypting disks. I have 2 servers: one is Apache for the web/frontend, and it talks to server 2, which is MySQL. They are both for intranet use only; no external access. I was looking into using PGP or GnuPG to encrypt the disks. I'm not clear, though, on exactly how this would work. Where would the keys be stored? On the client? On the Apache server?

    If there is a key on Apache to access MySQL, does there need to be a key for each user? If so, if key 1 is used to alter some data, would that data then be inaccessible to a user using key 2? And the Apache key, would that only be accessible to users with local keys? Is encryption done on the fly? Does it degrade performance?

    What would be the best approach to encrypt the data on these servers, but keep it accessible to users? Thanks!
