Search Results

Search found 13151 results on 527 pages for 'performance counters'.


  • Using datetime float representation as primary key

    - by devanalyst
    From my experience I have learned that using a surrogate INT column as the primary key, especially an IDENTITY column, offers better performance than a GUID or char/varchar primary key, so I use IDENTITY keys wherever possible. But recently I came across a schema where the tables were horizontally partitioned and managed via a partitioned view. The tables could not have an IDENTITY column, since that would make the partitioned view non-updatable. One workaround is to create a dummy 'keygenerator' table with an IDENTITY column to generate IDs for the primary key, but that would mean one 'keygenerator' table for each partitioned view. My next thought was to use float as a primary key, based on the following key algorithm that I devised:

        DECLARE @KEY FLOAT
        -- @EMP_ID is the integer employee ID for the current session
        SET @KEY = CONVERT(FLOAT, GETDATE()) / 100000.0
        SET @KEY = @EMP_ID + @KEY

    Here is how it works. CONVERT(FLOAT, GETDATE()) gives the float representation of the current datetime, since SQL Server internally represents every datetime as a float. Dividing by 100000.0 pushes all of its digits to the right of the decimal point. Adding @EMP_ID, an integer, then gives each employee a unique whole-number part. The logic is that the employee ID is guaranteed to be unique across sessions, since an employee cannot connect to the application more than once at the same time, and for the same employee the current datetime is unique each time a key is generated. In all, a unique key across all employee sessions and across time. So for employee IDs 11 and 12, I have key values like 12.40046693321566357 and 11.40046693542361111. But my concern is whether a float primary key offers any benefit compared to choosing a GUID or char/varchar primary key. Also important: because of the partitioning, the float column is going to be part of a composite key.
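    A minimal C# sketch (my approximation of the proposed algorithm, not the asker's code) illustrating the main uniqueness risk: the key is only as unique as the clock's resolution, so two keys generated for the same employee within one timer tick collide.

        // Approximates the T-SQL above: SQL Server's datetime-to-float
        // conversion is days since 1900-01-01.
        using System;

        class FloatKeyDemo
        {
            static double MakeKey(int empId, DateTime now)
            {
                double days = (now - new DateTime(1900, 1, 1)).TotalDays;
                return empId + days / 100000.0;
            }

            static void Main()
            {
                // DateTime.Now advances in coarse ticks (and SQL Server's
                // datetime has ~3.33 ms resolution), so back-to-back calls
                // often produce identical keys for the same employee.
                double k1 = MakeKey(11, DateTime.Now);
                double k2 = MakeKey(11, DateTime.Now);
                Console.WriteLine(k1 == k2); // frequently prints True
            }
        }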

    Read the article

  • Flash video slooow in AIR 2 HTMLLoader component

    - by shane
    I am working on a full screen kiosk application in Flex 4/AIR 2 using Flash Builder 4. We have a company training website which staff can access via the kiosk, and the main content is interactive Flash training videos. Our target machines are by no means 'beefy': they are Atom N270s @ 1.6GHz with 1GB RAM. As it stands the videos are all but unusable when used from within the AIR application; the application becomes completely unresponsive (100% CPU usage, click events take approx 5-10 seconds to register). So far I have tried:
    - Increasing the default frame rate from 24fps to 60 (nativeWindow.stage.frameRate = 60;). No improvement.
    - Running the videos in a stripped down version of my app, just a full screen HTMLLoader component pointed at the training website. No better than before.
    - Disabling hyper-threading. The Atom CPU is split into two virtual cores, and the AIR app was only able to use one thread, so it maxed out at 50% CPU usage. Since the kiosk will only run the AIR app, I am happy to lose hyper-threading to increase the performance of the AIR app. Marginal improvement.
    The same website with the same videos is responsive if viewed in IE7 on the same machine, although Internet Explorer does take advantage of the CPU's hyper-threading. The Flash videos are built with Adobe Captivate and, from what I understand, employ JavaScript to relay results back to the server. I will add more information about the video content asap, as the training guru is back in the office later this week.

    Read the article

  • Cloud-aware programming and help choosing a good framework

    - by Shoaibi
    How can I write a cloud-aware application, i.e. an application that takes advantage of being deployed on a cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take to migrate an existing application to being cloud-aware? I am also about to implement a web application idea which needs security, performance, caching and, more importantly, has to be free. I have been comparing some frameworks and found that Django has the least RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks that I have seen/know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails... So I would leave this to your opinion as well: suggest a good free framework based on my needs. Finally, thanks for reading the essay ;)

    Read the article

  • Execute remote Lua Script

    - by Bruno Lee
    Hi, I want to make an application that executes a remote script. The user can create a script (probably a Lua script), store it on the server, and then use an API to execute it. I was thinking that API could be a web service. So my questions are:
    - I need high performance to execute the script, so my first choice was a Lua script. Does anyone have another suggestion?
    - Because I need high performance, I was wondering whether a web service is the best solution. Maybe I could create a TCP/IP Windows service that handles the users' requests. It is important to say that I will have many users executing scripts at the same time, so I will have concurrency issues.
    - My scripts will query a database. I will use Tokyo Cabinet or Tokyo Tyrant; I think Tokyo Tyrant is the only viable option since I will have many requests. For performance, do I need connection pooling?
    - Is there any way to share variables between web service requests?
    - To build the web service or the Windows service I was thinking of using C++.
    Can someone help with these questions? Thanks.

    Read the article

  • Prefuse: Reloading of XML files

    - by John
    Hello all, I am new to the prefuse visualization toolkit and have a couple of general questions. For my purpose, I would like to perform an initial visualization using prefuse (graphview / graphml). Once rendered, upon a user click of a node, I would like to completely reload a new XML file for a new visualization. I want to do this in order to "pre-package" graphs for display. For example: if I search for Ted, I would like an XML file relating to Ted to load and render a display. In the display I see that Ted has associated nodes called Bill and Joe. When I click Joe, I would like to clear the display and load an XML file associated with Joe, and so on. I have looked into loading one very large XML file containing all node and relationship info and letting prefuse handle this using hops from one level to another, but eventually, I am sure, system performance issues will arise due to the size of the data. Thanks in advance for any help, John

    Read the article

  • Winforms: How to speed up Invalidate()?

    - by Pedery
    I'm developing a retained mode drawing application in GDI+. The application can draw simple shapes to a canvas and perform basic editing. The math that does this is optimized to the last byte and is not an issue. I'm drawing on a panel that is using the built-in ControlStyles.DoubleBuffer. Now, my problem arises if I run my app maximized on a big monitor (HD in my case). If I try to draw a line from one corner of the (big) canvas to the diagonally opposite one, it starts to lag and CPU usage goes way up. Each graphical object in my app has a bounding box. Thus, when I invalidate the bounding box of a line that goes from one corner of the maximized app to the diagonally opposite one, that bounding box is virtually as big as the canvas. While a user is drawing a line, this invalidation of the bounding box happens on the mousemove event, and there is a clear, visible lag. This lag also exists if the line is the only object on the canvas. I've tried to optimize this in many ways:
    - If I draw a shorter line, the CPU usage and the lag go down.
    - If I remove the Invalidate() and keep all other code, the app is quick.
    - If I use a Region (that only spans the figure) to invalidate instead of the bounding box, it is just as slow.
    - If I split the bounding box into a range of smaller boxes that lie back to back, thus reducing the invalidated area, there is no visible performance gain.
    Thus I'm at a loss here. How can I speed up the invalidation? On a side note, both Paint.NET and MSPaint suffer from the same shortcomings. Word and PowerPoint, however, seem to be able to paint a line as described above with no lag and no CPU load at all. So the desired result is achievable; the question is how.
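    One approach worth noting (a sketch under my own assumptions, not a confirmed fix for this app): cache the committed scene in an offscreen bitmap, so each invalidation during a drag costs one blit plus one line rather than a full retained-mode redraw. Names like DrawAllShapes and draggingLine are hypothetical stand-ins for the app's own members.

        private Bitmap sceneBuffer;

        // Rebuild the buffer only when a shape is added, removed, or edited.
        private void RebuildSceneBuffer()
        {
            if (sceneBuffer != null) sceneBuffer.Dispose();
            sceneBuffer = new Bitmap(canvas.ClientSize.Width, canvas.ClientSize.Height);
            using (Graphics g = Graphics.FromImage(sceneBuffer))
                DrawAllShapes(g); // hypothetical: renders every committed shape
        }

        private void canvas_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawImage(sceneBuffer, Point.Empty); // cheap blit of the static scene
            if (draggingLine) // hypothetical flag set during the mouse drag
                e.Graphics.DrawLine(Pens.Black, lineStart, lineCurrent);
        }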

    Read the article

  • ajax delay load UserControl asp.net

    - by user196202
    Regarding AJAX delayed loading of UserControls (or any controls), per the post at Encosia.com: http://encosia.com/2008/02/05/boost-aspnet-performance-with-deferred-content-loading/ I tried to implement it, but I noticed that it only works for simple controls, or UserControls that contain simple ASP.NET controls (or HTML tags). When advanced dynamic AJAX controls are involved (like AjaxControlToolkit or Telerik controls) that have JavaScript inside them, this method of injecting the HTML into the .InnerHtml property of a div tag (for example) IS NOT WORKING. I read that the browser needs to load scripts at page load, and after that it won't interpret scripts injected via .InnerHtml. I have put together an example of a delayed-load project (from encosia.com by Dave Ward) with my modifications (look at DefaultPopup.aspx, beforePopup.aspx and AfterPopup.aspx), in which I modified the RssReader to show a ListView with popup items (implemented via the ACT HoverMenuExtender). Loaded the regular way, the popup items are shown correctly, but with the delayed load - done by creating a virtual page to render the HTML and injecting it into the .InnerHtml property - it ISN'T WORKING. So my questions are: is there a way to do delayed loading for controls which include scripts, like ACT, Telerik and others? And for the AJAX templates - if I need to inject an advanced control into the page, how do I do it with your approach? Thanks very much. (I can't attach files here, so please ask me by mail ([email protected]) and I'll send the project.) Zahi Kramer

    Read the article

  • Hibernate: order multiple one-to-many relations

    - by Markos Fragkakis
    I have a search screen, using JSF, JBoss Seam and Hibernate underneath. There are columns for A, B and C, where the relations are one-to-many at each step: A 1--* B and B 1--* C, so A has a List<B> and B has a List<C>. The UI table supports ordering by any column (ASC or DESC), so I want the results of the query to be ordered; this is the reason I used Lists in the model. However, I got an exception that Hibernate cannot eagerly fetch multiple bags (it considers both Lists to be bags). There is an interesting blog post here, and they identify the following solutions:
    - Use the @IndexColumn annotation (there is none in my DB, and what's more, I want the position of results to be determined by the ordering, not by an index column).
    - Fetch lazily (for performance reasons, I need eager fetching).
    - Change List to Set.
    So I changed the List to Set, which by the way is more correct, model-wise. First, if I don't use @OrderBy, the PersistentSet returned by Hibernate wraps a HashSet, which has no ordering. Second, if I do use @OrderBy, the PersistentSet wraps a LinkedHashSet, which is what I would like, but the @OrderBy property is hardcoded, so any other ordering I request through the UI is applied after it. I tried again with Sets and used SortedSet (and its implementation, TreeSet), but I have some issues. I want ordering to take place in the DB, not in memory, which is what TreeSet does (either through a Comparator or through the Comparable interface of the elements). Second, I found that there is the Hibernate annotation @Sort, which has a SortOrder.UNSORTED value and can also take a Comparator. I still haven't managed to make it compile, and I am still not convinced it is what I need. Any advice?

    Read the article

  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC, and my database (which doesn't have any data in it yet, only the correct tables) uses SQL Server 2008, which I have installed on my development machine. I connect to the database from my application by using the Server Explorer, followed by LINQ to SQL mapping. Once I finish developing the site, I will move it over to my hosting service, which is a virtual hosting plan. I'm concerned that moving my current SQL Server setup to the production server will be hard, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio, which is then stored in the App_Data directory. My questions are the following:
    - Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file? If so, how can I move it? I believe this is called the Detach command, is it not?
    - Are there any performance/security issues that can occur with a .mdf file like this?
    - Would my intended setup work OK with a typical virtual hosting plan? I'm hoping that the .mdf database won't count against the limited number of SQL Server databases that can be created with my plan.
    I hope this question isn't too broad. Thanks in advance! Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.
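    For reference, the practical difference usually comes down to the connection string; a sketch with hypothetical names (server instance, database and file names are placeholders, not values from the question):

        // Attached to a running SQL Server instance (the current setup):
        const string ServerDb =
            @"Data Source=.\SQLEXPRESS;Initial Catalog=MySite;Integrated Security=True";

        // User-instance .mdf living in App_Data (the Visual Studio route):
        const string AppDataMdf =
            @"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MySite.mdf;" +
            @"Integrated Security=True;User Instance=True";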

    Read the article

  • c# (wcf) architecture file and directory structure (and instantiation)

    - by stevenrosscampbell
    Hello and thanks for any assistance. I have a WCF service that I'm trying to properly modularize. I'm interested in finding out whether there is a better way of implementing the file and directory structure, along with instantiation. Is there a more appropriate abstraction that I may be missing? Is this the best approach, especially for performance and the ability to handle thousands of simultaneous requests? Currently I have the following structure:

        // Root\Service.cs
        public class Service : IService
        {
            public void CreateCustomer(Customer customer)
            {
                CustomerService customerService = new CustomerService();
                customerService.Create(customer);
            }

            public void UpdateCustomer(Customer customer)
            {
                CustomerService customerService = new CustomerService();
                customerService.Update(customer);
            }
        }

        // Root\Customer\CustomerService.cs
        public class CustomerService
        {
            public void Create(Customer customer) { /* DO SOMETHING */ }
            public void Update(Customer customer) { /* DO SOMETHING */ }
            public void Delete(int customerId) { /* DO SOMETHING */ }
            public Customer Retrieve(int customerId) { /* DO SOMETHING */ }
        }

    Note: I do not include the Customer object or the data access libraries in this example, as I am only concerned with the service. If you could let me know what you think, a better way, or a resource that could help out, thanks kindly. Steven

    Read the article

  • choosing row (from group) with max value in a SQL Server Database

    - by sriehl
    I have a large database and am putting together a report of the data. I have aggregated and summed the data from many tables to get two tables that look like the following:

        Table 1             Table 2
        id | code | value   id | code | value
        13 | AA   | 0.5     13 | AC   | 2.0
        13 | AB   | 1.0     14 | AB   | 1.5
        14 | AA   | 2.0     13 | AA   | 0.5
        15 | AB   | 0.5     15 | AB   | 3.0
        15 | AD   | 1.5     15 | AA   | 1.0

    I need to get a list of ids, each with the code whose value (summed from both tables) is the largest:

        13 | AC
        14 | AA
        15 | AB

    There are 4-6 thousand records and it is not possible to change the original tables. I'm not too worried about performance, as I only need to run it a few times a year. Edit: Let me see if I can explain a bit more clearly. Imagine the id is the customer id, the code is who they ordered from, and the value is how much they spent there. I need a list of all the customer ids and, for each customer, the store where they spent the most money (and if they spent the same amount at two different stores, a value such as 'ZZ' in place of the store name).
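    To pin down the algorithm before writing the SQL, here is a LINQ sketch in C# (row type and table names are hypothetical): union the two tables, sum value per (id, code), then keep the top code per id, emitting 'ZZ' on a tie.

        var best = table1.Concat(table2)
            .GroupBy(r => new { r.Id, r.Code })
            .Select(g => new { g.Key.Id, g.Key.Code, Total = g.Sum(r => r.Value) })
            .GroupBy(x => x.Id)
            .Select(g =>
            {
                // Take the two largest totals so a tie can be detected.
                var top = g.OrderByDescending(x => x.Total).Take(2).ToList();
                bool tie = top.Count == 2 && top[0].Total == top[1].Total;
                return new { Id = g.Key, Code = tie ? "ZZ" : top[0].Code };
            });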

    Read the article

  • Relay WCF Service

    - by Matt Ruwe
    This is more of an architectural and security question than anything else. I'm trying to determine if a suggested architecture is necessary. Let me explain my configuration. We have a standard DMZ established that essentially has two firewalls: one that's external facing and the other that connects to the internal LAN. The application tiers currently run as follows:
    - Outside the firewall: Silverlight application
    - In the DMZ: WCF service (business logic & data access layer)
    - Inside the LAN: database
    I'm receiving input that the architecture is not correct. Specifically, it has been suggested that because "a web server is easily hacked" we should place a relay server inside the DMZ that communicates with another WCF service inside the LAN, which will then communicate with the database. The external firewall is currently configured to only allow port 443 (https) to the WCF service. The internal firewall is configured to allow SQL connections from the WCF service in the DMZ. Ignoring the obvious performance implications, I don't see the security benefit either. I'm going to reserve my judgement of this suggestion to avoid polluting the answers with my bias. Any input is appreciated. Thanks, Matt

    Read the article

  • Interface Casting vs. Class Casting

    - by Legatou
    I've been led to believe that casting can, in certain circumstances, become a measurable hindrance to performance. This may be more so the case when we start dealing with incoherent webs of nasty exception throwing/catching. Given that I wish to develop more correct heuristics when it comes to programming, I've been prompted to ask the .NET gurus out there: is interface casting faster than class casting? To give a code example, let's say this exists:

        public interface IEntity
        {
            IParent DaddyMommy { get; }
        }

        public interface IParent : IEntity { }

        public class Parent : Entity, IParent { }

        public class Entity : IEntity
        {
            public IParent DaddyMommy { get; protected set; }

            public IParent AdamEve_Interfaces
            {
                get
                {
                    IEntity e = this;
                    while (e.DaddyMommy != null)
                        e = e.DaddyMommy as IEntity;
                    return e as IParent;
                }
            }

            public Parent AdamEve_Classes
            {
                get
                {
                    Entity e = this;
                    while (e.DaddyMommy != null)
                        e = e.DaddyMommy as Entity;
                    return e as Parent;
                }
            }
        }

    So, is AdamEve_Interfaces faster than AdamEve_Classes? If so, by how much? And, if you know the answer, why?
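    A minimal benchmark sketch (assuming the types above; timings are illustrative, not authoritative) for measuring the two casts directly:

        using System;
        using System.Diagnostics;

        class CastBenchmark
        {
            static void Main()
            {
                object o = new Parent();
                const int N = 100000000;
                long sink = 0; // keeps the JIT from discarding the loop body

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++)
                    if ((o as IParent) != null) sink++;
                Console.WriteLine("interface cast: " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                for (int i = 0; i < N; i++)
                    if ((o as Parent) != null) sink++;
                Console.WriteLine("class cast:     " + sw.ElapsedMilliseconds + " ms (" + sink + ")");
            }
        }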

    Read the article

  • Per-pixel per-component alpha blending in Windows

    - by Crend King
    I have a 24-bit bitmap with R, G, B color channels and a 24-bit bitmap with R, G, B alpha channels. I want to alpha blend the first bitmap onto an HDC in GDI or a RenderTarget in Direct2D, applying the alpha channels per component. For example, suppose for one pixel the bitmap color is (192, 192, 192), the HDC color is (0, 255, 255) and the alpha channels are (30, 40, 50). The final HDC color should be (22, 245, 242). I know I can BitBlt the HDC to a memory HDC first, do the alpha blending by manually calculating the color of each pixel, and finally BitBlt back. I just want to avoid the additional blitting and let the APIs do their job (faster since they are in kernel space). The first idea that comes to mind is to split the source bitmap into three red-only, green-only and blue-only 8-bit bitmaps, do normal alpha blending, then composite the three output bitmaps into the HDC. But I don't see a way to do the splitting and compositing natively in Windows (would a Direct2D layer help?). Also, the splitting and compositing would require a lot of additional copying; the performance overhead would be too high. Or maybe do the alpha blending in three passes, with each pass blending one channel while leaving the other two unchanged. Thanks for any comment. EDIT: I found this question, and the answer should be a good reference for this problem. However, besides AC_SRC_OVER, no other blending operation is supported. Why doesn't Microsoft improve this API?
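    For reference, a language-neutral sketch (written here in C#) of the per-component blend the example numbers imply: out = (src * a + dst * (255 - a)) / 255, applied independently per channel, with truncation.

        static byte BlendChannel(byte src, byte dst, byte alpha)
        {
            // R: (192,  0, a=30) -> 22;  G: (192, 255, a=40) -> 245;
            // B: (192, 255, a=50) -> 242 -- matching the example pixel above.
            return (byte)((src * alpha + dst * (255 - alpha)) / 255);
        }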

    Read the article

  • What are the primitive Forth operators?

    - by Barry Brown
    I'm interested in implementing a Forth system, just so I can get some experience building a simple VM and runtime. When starting in Forth, one typically learns about the stack and its operators (DROP, DUP, SWAP, etc.) first, so it's natural to think of these as being among the primitive operators. But they're not. Each of them can be broken down into operators that directly manipulate memory and the stack pointers. Later one learns about store (!) and fetch (@), which can be used to implement DUP, SWAP, and so forth (ha!). So what are the primitive operators? Which ones must be implemented directly in the runtime environment so that all the others can be built from them? I'm not interested in high performance; I want something that I (and others) can learn from. Operator optimization can come later. (Yes, I'm aware that I can start with a Turing machine and go from there. That's a bit extreme.) Edit: What I'm aiming for is akin to bootstrapping an operating system or a new compiler. What do I need to implement, at minimum, so that I can construct the rest of the system out of those primitive building blocks? I won't implement this on bare hardware; as an educational exercise, I'd write my own minimal VM.
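    To make the store/fetch point concrete, here is a toy sketch in C# (my own illustration, not an authoritative primitive set): if the data stack lives in addressable memory and the VM exposes SP@ alongside @ and !, then DUP stops being primitive, since : DUP SP@ @ ; defines it.

        class MiniForth
        {
            readonly int[] mem = new int[1024]; // linear memory
            int sp = 1024;                      // data stack grows downward in mem

            public void Push(int v) { mem[--sp] = v; }
            public int Pop() { return mem[sp++]; }

            public void Fetch() { Push(mem[Pop()]); }              // @  ( addr -- x )
            public void Store() { int a = Pop(); mem[a] = Pop(); } // !  ( x addr -- )
            public void SpFetch() { int a = sp; Push(a); }         // SP@ ( -- addr of TOS )

            // : DUP SP@ @ ;  -- derived from the primitives, not primitive itself
            public void Dup() { SpFetch(); Fetch(); }
        }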

    Read the article

  • How to make Stack.Pop threadsafe

    - by user260197
    I am using the BlockingQueue code posted in this question, but realized I needed a Stack instead of a Queue given how my program runs, so I converted it to use a Stack and renamed the class accordingly. For performance I removed the locking in Push, since my producer code is single-threaded. My problem is: how can the threads working on the (now) thread-safe Stack know when it is empty? Even if I add another thread-safe wrapper around Count that locks on the underlying collection the way Push and Pop do, I still run into the race condition that accessing Count and then calling Pop is not atomic. Possible solutions as I see them (which is preferred, and am I missing any that would work better?):
    - Consumer threads catch the InvalidOperationException thrown by Pop().
    - Pop() returns nullptr when _stack->Count == 0; however, C++/CLI does not have the default() operator a la C#.
    - Pop() returns a boolean and uses an output parameter to return the popped element.
    Here is the code I am using right now:

        generic <typename T>
        public ref class ThreadSafeStack
        {
        public:
            ThreadSafeStack()
            {
                _stack = gcnew Collections::Generic::Stack<T>();
            }

            // Not locked: the single-threaded producer is the only caller.
            void Push(T element)
            {
                _stack->Push(element);
            }

            T Pop(void)
            {
                System::Threading::Monitor::Enter(_stack);
                try
                {
                    return _stack->Pop();
                }
                finally
                {
                    System::Threading::Monitor::Exit(_stack);
                }
            }

            property int Count
            {
                int get(void)
                {
                    System::Threading::Monitor::Enter(_stack);
                    try
                    {
                        return _stack->Count;
                    }
                    finally
                    {
                        System::Threading::Monitor::Exit(_stack);
                    }
                }
            }

        private:
            Collections::Generic::Stack<T> ^_stack;
        };
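    A sketch of the third option, written in C# syntax for brevity (the C++/CLI version would be analogous): fold the emptiness test and the pop into one critical section so the check-then-act is atomic.

        public bool TryPop(out T element)
        {
            lock (_stack)
            {
                if (_stack.Count == 0)
                {
                    element = default(T); // caller gets a well-defined value on failure
                    return false;
                }
                element = _stack.Pop();
                return true;
            }
        }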

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also:
    - Using the same PixelFormat as the source bitmap, to avoid format conversion
    - Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer
    This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again). I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image). I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy), so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?

    Read the article

  • Codility-like sites for code golfs

    - by Adam Matan
    Hi, I ran into the cool new service codility.com after listening to one of the recent stackoverflow.com podcasts. In short, it presents the user with a programming riddle to solve within a given time frame. The user writes code in an online editor and has the ability to run the program and view the standard output. After final submission, the user sees the final score and which tests failed. Quoting Joel Spolsky: "You are given a programming problem, you can do it in Java, C++, C#, C, Pascal, Python and PHP, which is pretty cool, and you have 30 minutes. And it gives you an editor in a webpage. And you've got to just start typing your code. And it's going to time you, basically you have to do it in a certain amount of time. And it actually runs your code and determines the performance characteristics of your code." It is intended for job interview screenings, but the idea seems very cool for code golfs and for practicing new languages. Do you know if there's a proper open replacement? Adam

    Read the article

  • How to temporarily replace one primitive type with another when compiling to different targets in c#

    - by Keith
    How can I easily/quickly switch between floats and doubles (for example) when compiling to two different targets that differ only in which of these primitive types they use? Discussion: I have a large amount of C# code under development that I need to compile to alternatively use float, double or decimal depending on the use case of the target assembly. Using something like "class MYNumber : Double", so that only one line of code would need to change, does not work, since Double is sealed; and C# has no textual #define macros. Peppering the code with #if/#else statements is also not an option: there is just too much supporting math/operator code using these particular primitive types. I am at a loss on how to do this apparently simple task. Thanks! Edit: Just a quick comment in relation to boxing, mentioned in Kyle's reply: unfortunately I need to avoid boxing, mainly because floats are chosen when maximum speed is required and decimals when maximum accuracy is the priority (and taking the 20x+ performance hit is acceptable there). Boxing would probably rule out decimals as a valid choice and defeat the purpose somewhat. Edit2: For reference, for those suggesting generics as a possible answer: there are many issues which count generics out (at least for our needs). For an overview and further references see Using generics for calculations.
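    One workable approach (a sketch, not necessarily something this codebase can tolerate): a using-alias switched by compilation symbols. The caveat, and it is exactly the pain point here, is that C# using aliases are per-file, so the block must appear at the top of every file that touches the numeric type.

        #if USE_DECIMAL
        using MyNumber = System.Decimal;
        #elif USE_FLOAT
        using MyNumber = System.Single;
        #else
        using MyNumber = System.Double;
        #endif

    Builds for the different targets then just define USE_FLOAT or USE_DECIMAL (e.g. /define:USE_FLOAT to csc, or a DefineConstants entry per build configuration).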

    Read the article

  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory-mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory-mapped files work. If anyone knows the O/S implementation details behind memory-mapped files, I would appreciate comments on the following theory: if two processes share a memory-mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. The number of soft page faults would therefore be directly proportional to the size of the data written. My experiments seem to bear out this theory. When I share data I write one contiguous block of data; in other words, the entire shared memory area is overwritten each time, and if I make the block bigger the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory-mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose a memory-mapped file instead of a TCP socket connection because I thought it would be more efficient. Note, if the soft page faults are harmless, please say so. I've heard that at some point, if the number is excessive, the system's performance can suffer. If soft page faults are not intrinsically harmful, I'd like to hear any guidelines as to what number per second is "excessive". Thanks.

    Read the article

  • Creative ways to punish (or just curb) laziness in coworkers

    - by FerretallicA
    Like the subject suggests, what are some creative ways to curb laziness in co-workers? By laziness I'm talking about things like using variable names like "inttheemplrcd" instead of "intEmployerCode", or not keeping their projects synced with SVN - not just people who use the last of the sugar in the coffee room and don't refill the jar. So far the two most effective things I've done both involve the core library my company uses. Since most of our programs are in VB.NET, the lack of case sensitivity is abused a lot. I've got certain features of the library using Reflection to access data in the client apps, which has a negligible performance hit and introduces case sensitivity in a lot of places where it is used. In instances where we have an agreed standard which is compromised by blatant laziness I take it a step further, like the DatabaseController class which will blatantly reject any DataTable passed to it which isn't named dtSomething (i.e. it must begin with dt and the third letter must be capitalised). It's frustrating to have to resort to things like this, but it has also gradually helped drill more attention to detail into their heads. Another is adding some code to the library's initialisation function to display a big and potentially embarrassing (only if seen by a client) message advising that the program is running in debug mode. We have had many instances where projects were sent to clients built in debug mode, which has a lot of implications for us (especially with regard to error recovery), and doing that has made sure they always build for release before distributing. Any other creative (i.e. not StyleCop etc.) approaches like this?

    Read the article

  • Cross-platform game development: ease of development vs security

    - by alcuadrado
    Hi, I'm a member of and contributor to the Argentum Online (AO) community, the first MMORPG from Argentina, which is Free Software. Although it's not 3D, it's really addictive and has some tens of thousands of users. Unluckily, AO was developed in Visual Basic (yes, you can laugh) by the former community, so, as you can imagine, the code not only sucks, it has zero portability. I'm planning, with some friends, to rewrite the client, and as a GNU/Linux fanatic I want to do it cross-platform. Some other people are doing the same with the server in Java. My biggest problem is that we would like to use a rapid development language (like Java, Ruby or Python), but the client would be pretty insecure. A Ruby/Python version would have all its code available, and the Java one would be easily decompilable (yes, we have some crackers in the community). We have considered the option of implementing the security module in C/C++ as a dynamic library, but it could be replaced with a custom one, so it's not really secure. We are also considering the option of doing the core application in C++ and the GUI in Ruby/Python, but haven't analysed all its implications yet. We really don't want to code the entire game in C/C++, as it doesn't need that much performance (the game runs at 18fps on average) and we want to develop it as fast as possible. So what would you choose in my case? Thank you!

    Read the article

  • Running Magento for multiple clients - single Installaton vs. multiple installations

    - by Chris Hopkins
    Hi there, I am looking to set up a Magento (Community Edition) installation for multiple clients and have researched the matter for a few days now. I can see that the Enterprise Edition has what I need, but surprisingly I am not willing to shell out the $12,000-odd yearly subscription. There seem to be a few options available to me, but I am worried about the performance I will get out of each.
    Option 1) Single install using the AITOC advanced permissions module. This is really what I am after: one installation, so that I can update all my core files at the same time and manage all my store users from one place. The problems are that I don't know anything about the reliability of this extra product, and that I have to pay a bit extra. I am also worried that if I have 10 stores running off this one installation it might slow down so much that it keels over, as I have heard a lot about Magento's slowness. Module link: http://www.aitoc.com/en/magentomods_advanced_permissions.html
    Option 2) Multiple installations of Magento on one server, one for each shop. Here I have 10 Magento installations on one server, all running happily away and not costing any extra money, but I now have 10 separate stores to update and maintain, which could be annoying. Also, I haven't been able to find many other people using this method, and when I have, they are usually asking how to stop their servers from dying. So this route seems like it could be even harder on my server, as there is more going on overall; but if my server could take it, each Magento installation would be simpler and less likely to slow down than one installation having to run 10 shops on its own?
    Option 3) Use lots of servers and lots of Magento installations. I just so do not want to do this.
    Option 4) Buy Magento Enterprise. I do not have the money to do this.
    So which route is least likely to blow up my server? And does anyone have experience with this holy grail of a module? Thanks for reading and thanks in advance for any help - Chris Hopkins

    Read the article

  • How to localize an app on Google App Engine?

    - by Petri Pennanen
    What options are there for localizing an app on Google App Engine? How do you do it using Webapp, Django, web2py or [insert framework here]?

    1. Readable URLs and entity key names. Readable URLs are good for usability and search engine optimization (Stack Overflow is a good example of how to do it). On Google App Engine, key-based queries are recommended for performance reasons. It follows that it is good practice to use the entity key name in the URL, so that the entity can be fetched from the datastore as quickly as possible. Currently I use the function below to create key names:

        import re
        import unicodedata

        def urlify(unicode_string):
            """Translates latin1 unicode strings to url friendly ASCII.

            Converts accented latin1 characters to their non-accented ASCII
            counterparts, converts to lowercase, converts spaces to hyphens
            and removes all characters that are not alphanumeric ASCII.

            Arguments
                unicode_string: Unicode encoded string.

            Returns
                String consisting of alphanumeric (ASCII) characters and
                hyphens.
            """
            str = unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore')
            str = re.sub('[^\w\s-]', '', str).strip().lower()
            return re.sub('[-\s]+', '-', str)

    This works fine for English and Swedish; however, it will fail for non-western scripts and remove letters from some western ones (like Norwegian and Danish with their œ and ø). Can anyone suggest a method that works with more languages?

    2. Translating templates. Does Django internationalization and localization work on Google App Engine? Are there any extra steps that must be performed? Is it possible to use Django i18n and l10n for Django templates while using Webapp? The Jinja2 template language provides integration with Babel. How well does this work, in your experience? What options are available for your chosen template language?

    3. Translated datastore content. When serving content from (or storing it to) the datastore: is there a better way than getting the *accept_language* parameter from the HTTP request and matching it with a language property that you have set on each entity?

    Read the article

  • update variable based upon results from .NET backgroundworker

    - by Bruce
    I've got a C# program that talks to an instrument (a spectrum analyzer) over a network. I need to be able to change a large number of parameters in the instrument and read them back into my program. I want to use BackgroundWorker to do the actual talking to the instrument so that UI performance doesn't suffer. The way this works is: 1) send a command to the instrument with the new parameter value, 2) read the parameter back from the instrument so I can see what actually happened (for example, I try to set the center frequency above the max that the instrument will handle and it tells me what it will actually handle), and 3) update a program variable with the actual value received from the instrument. Because there are quite a few parameters to be updated, I'd like to use a generic routine. The part I can't seem to get my brain around is updating a variable in my code with what comes back from the instrument via the BackgroundWorker. If I used a separate RunWorkerCompleted event for each parameter, I could hardwire the update directly to the variable. I'd like to come up with a single routine that is capable of updating any of the variables. All I can come up with is passing a reference number (different for each parameter) and using a switch statement in the RunWorkerCompleted handler to direct the result. There has to be a better way.
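    One pattern that avoids the switch statement (a sketch; SendAndReadBack and the parameter names are hypothetical stand-ins for the real instrument I/O): pass a setter delegate along with the command, so one generic routine can route any result back to its variable. RunWorkerCompleted fires on the UI thread when the worker was started from it, so the assignment is safe there.

        private void QueryParameter(string command, Action<double> applyResult)
        {
            var worker = new BackgroundWorker();
            worker.DoWork += (s, e) =>
            {
                // Worker thread: send the command, read the instrument's
                // actual value back.
                e.Result = SendAndReadBack(command); // hypothetical I/O call
            };
            worker.RunWorkerCompleted += (s, e) =>
            {
                if (e.Error == null)
                    applyResult((double)e.Result); // runs on the UI thread
            };
            worker.RunWorkerAsync();
        }

        // Usage: one line per parameter, no switch needed.
        // QueryParameter("FREQ:CENT?", v => centerFrequency = v);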

    Read the article
