Search Results

Search found 6625 results on 265 pages for 'advice'.


  • UEFI boot options gone

    - by user1797930
    I ran into some issues booting Windows after trying to make a complete backup of the disk. After searching for information about some of the error codes, I found advice to change some BIOS settings, but instead I thought I would just "restore defaults" to make sure all settings were set as originally intended. After doing so, all UEFI boot options except for "Windows Boot Manager" are gone. That includes the CD/DVD drive, so I cannot even boot from a recovery DVD anymore - and, as explained, Windows is not able to boot either. Do you have any advice? When I originally added a secondary drive, it was automatically added to the boot options menu. Even after physically removing and re-adding the drive, the option does not appear again. I have tried unplugging the power, holding down the power button for 10 seconds, and booting afterwards - no change. It's a laptop, so removing the CMOS battery is not an option. I have read that this is an issue with data being removed from NVRAM, but I am unable to find a way to recover it. "Add new boot options" requires a path - but the CD/DVD drive was originally available without any CDs in it - so there is no path available for adding the drive. I did try to open the EFI shell, but it does not seem to be embedded in the UEFI/BIOS; it just says "not found". I'm really lost here - any advice is appreciated.

    Read the article

  • Wine 1.6.2-Trying to switch to 32-bit Wineprefix from 64-bit Wine (Trusty 14.04). Can anyone help me out?

    - by AlternateSteve90
    Hello fellow Ubuntu users, I'm having a little trouble with Wine 1.6.1 and I was wondering if someone could help me out. I recently downloaded some 32-bit games that I'd wanted to try (BeamNG Drive and Bugbear's Next Car Game demo) and I had run into some trouble trying to get either of these games to run. So I came across a couple of pieces of advice on the 'Net, one here on the Ubuntu community site and the other at BeamNG's forums, on how to create a 32-bit wineprefix on a 64-bit setup. I managed to create the wine32 folder, but now I'm having trouble making it my default Wine setup. Anybody have any idea how I can do that? I'll post the URLs for said advice, btw: http://www.beamng.com/threads/1788-Installing-DRIVE-under-Linux-via-Wine and "How do I create a 32-bit WINE prefix?". Here's what I've tried so far in the Terminal:

        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/user/wine32' WINEARCH='win32' wine 'wineboot'
        wine: chdir to /home/user/wine32 : No such file or directory
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
        wine: created the configuration directory '/home/steven/wine32'
        fixme:storage:create_storagefile Storage share mode not implemented.
        err:mscoree:LoadLibraryShim error reading registry key for installroot
        err:mscoree:LoadLibraryShim error reading registry key for installroot
        err:mscoree:LoadLibraryShim error reading registry key for installroot
        err:mscoree:LoadLibraryShim error reading registry key for installroot
        fixme:storage:create_storagefile Storage share mode not implemented.
        fixme:iphlpapi:NotifyAddrChange (Handle 0x10ee890, overlapped 0x10ee89c): stub
        wine: configuration in '/home/steven/wine32' has been updated.
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=$HOME/.wine32 wine dxsetup.exe
        wine: created the configuration directory '/home/steven/.wine32'
        fixme:storage:create_storagefile Storage share mode not implemented.
        err:mscoree:LoadLibraryShim error reading registry key for installroot
        err:mscoree:LoadLibraryShim error reading registry key for installroot
        err:mscoree:LoadLibraryShim error reading registry key for installroot
        err:mscoree:LoadLibraryShim error reading registry key for installroot
        fixme:storage:create_storagefile Storage share mode not implemented.
        fixme:iphlpapi:NotifyAddrChange (Handle 0x103e2b8, overlapped 0x103e2d0): stub
        fixme:storage:create_storagefile Storage share mode not implemented.
        fixme:iphlpapi:NotifyAddrChange (Handle 0x10fe890, overlapped 0x10fe89c): stub
        wine: configuration in '/home/steven/.wine32' has been updated.
        wine: cannot find L"C:\windows\system32\dxsetup.exe"
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEARCH=win64 winecfg
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEARCH=win32 winecfg
        wine: WINEARCH set to win32 but '/home/steven/.wine' is a 64-bit installation.
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/user/wine32' WINEARCH='win32' wine 'wineboot'
        wine: chdir to /home/user/wine32 : No such file or directory
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=/home/steven/wine32 WINEARCH='win32' wine 'wineboot'
        steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=/home/steven/wine32 WINEARCH=win32 wine wineboot
        steven@steven-HP-Pavilion-17-Notebook-PC:~$

    So, yeah. TBH, though, I'm far from an expert and perhaps I've been going about it all the wrong way. In the meantime, I'll try to keep looking for solutions on my own, but if anybody can help me solve this dilemma, especially if anyone happens to own either of these two games in particular, I'd appreciate it. :)

    Read the article

  • SQLAuthority News – Wireless Router Security and Attached Devices – Complex Password

    - by pinaldave
    In the last four days (April 21-24), I received calls from friends who told me that they had gotten strange emails from me. To my surprise, I did not send them any emails. I was not worried until my wife complained that she could not find one of the very important folders containing our daughter's photos, located on our shared drive. This was alarming to me, so I started searching through my computer's folders. Again, please note that I am by no means a security expert. I scanned my entire computer for viruses and spyware, and strangely, I found nothing. I tried to think about what could have caused this. I suddenly realized that there had been a power outage in my area for about two hours during the days I have mentioned. Back then, my wireless router needed to be reset, and so I did. I had set up my WPA-PSK [TKIP] + WPA2-PSK [AES] security very well. My key, however, was very simple ('SQLAuthority1'), and I had never thought of changing it. (It is now replaced with a very complex one.) While checking the Attached Devices, I found out that there was a very strange computer name and IP attached to my network. As soon as I found that strange device attached to my network, I shut down my local network. Afterwards, I reconfigured my wireless router with a more complex security key. Since I created the complex password, I have noticed that the user is no longer connecting to my machine. Subsequently, I figured out that I could also set up an Access Control List, and I added my networked computers to that list as well. When I tried to connect from an external laptop which was not in the list but had a valid security key, I was not able to access the network, nor able to connect to it. I also wasn't able to connect using remote desktop, so I think that is good. If you have received any nasty emails from me (from my gmail account) during the afore-mentioned days, I want to apologize. I am already paying for my negligence of not using a complex password, by way of losing the important photos of my daughter. I have already checked with my client, whose password I had saved in SSMS, and there was no issue at all. In fact, I have decided to never leave any saved password for a production server in my SSMS. Here is the tip SQL SERVER – Clear Drop Down List of Recent Connection From SQL Server Management Studio to clean them. I think after doing all this, I am feeling safe right now. However, I believe that safety is often an illusion. I need your help and advice if there is anything more I can do to stop unauthorized access. I am seeking advice and help through your comments. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • The Hunger Games for Aspiring IT Professionals

    - by Dain C. Hansen
    It seems that no one can escape the buzz around Hunger Games. And who could? Stephen King said it best in his review when he referred to Collins' novel as "a violent, jarring speed-rap of a novel that generates nearly constant suspense and may also generate a fair amount of controversy". So what's the tie-in for IT? Let's leave the dystopia of District 12 and come back to today's reality. This is the world of radical IT paradigm shifts that haven't been seen since Java was introduced in 1995. Everything you learned in school is probably outdated as of Friday. And everything you learned on Friday will probably change when you get to work on Monday. Nevertheless, we're eager, we're aspiring, we're hungry to learn. While the challenges upon us may not rival the venomous bees (or 'tracker jackers') seen in this blockbuster, there are certainly obstacles to be found. In preparation, I leave you two pieces of advice - aside from avoiding werewolves…

    Learn the Cloud. If you had asked me what to learn in 1995, I would have said, "Go learn Java". But now my advice is "Go learn Java and then learn Cloud". Cloud computing and Java go hand in hand. This is especially true for Oracle's own Public Cloud, which uses Java (via WebLogic 12c) as well as Oracle Database at its core foundation. Understanding the connotations of elasticity, scale, virtualization, and multi-tenancy (to name just a few) requires a strong foundation in computer science and especially Java to get it right. Without Java, the Cloud is nothing more than a brittle application meagerly deployed on the internet.

    Get Social and Actively Participate. And at all levels. Socializing your ideas internally is dreadfully important. And this means socializing and communicating your good ideas to lines of business, to architects, business analysts, developers, DBAs and Operations. But don't forget to go external. Stay current by being on the lookout for blogs, tweets, webcasts, papers, podcasts and videos for your technology area. Be not just a subscriber but a participant in these channels as well. Attend industry and vendor sponsored events to learn from the experts – and seek out opportunities to stay connected with those that are smarter than you. You'll gain more understanding if you participate actively. At the same time you'll make friends (and allies) and you'll be glad you did. To help you get social and actively participate (while learning the Cloud), here are a couple of pointers for you:
    - See our website on Cloud and Fusion Middleware
    - Subscribe to our regular Fusion Middleware Newsletter
    - Follow us on Twitter and Facebook
    - Find us at one of our key events
    Meanwhile, happy IT hunger games!

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations - no distribution statistics, no parallel query plans for queries that modify table variables - but goes on to suggest that for reasonably small-scale, strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers. The problem with temp tables is that, because they are scoped either to the procedure or to the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. The connection will then hang around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive to use for the developer because they conform to scoping rules that are closer to those in procedural code. On the surface, then, they seem an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use table variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved. In the meantime, what is the right approach?
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.

    Read the article

  • Exception Handling Frequency/Log Detail

    - by Cyborgx37
    I am working on a fairly complex .NET application that interacts with another application. Many single-line statements are possible culprits for throwing an Exception, and there is often nothing I can do to check the state before executing them to prevent these Exceptions. The question is, based on best practices and seasoned experience, how frequently should I lace my code with try/catch blocks? I've listed three examples below, but I'm open to any advice. I'm really hoping to get some pros/cons of various approaches. I can certainly come up with some of my own (greater log granularity for the O-C approach, better performance for the Monolithic approach), so I'm looking for experience over opinion. EDIT: I should add that this application is a batch program. The only "recovery" necessary in most cases is to log the error, clean up gracefully, and quit. So this could be seen to be as much a question of log granularity as exception handling. In my mind's eye I can imagine good reasons for both, so I'm looking for some general advice to help me find an appropriate balance.

    Monolithic Approach

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } catch (Exception e) {
                    Log(e);
                } finally {
                    CleanUp();
                }
            }
            public static void Step1() {
                ExternalApp.Dangerous1();
                ExternalApp.Dangerous2();
            }
            public static void Step2() {
                ExternalApp.Dangerous3();
                ExternalApp.Dangerous4();
            }
            public static void Step3() {
                ExternalApp.Dangerous5();
                ExternalApp.Dangerous6();
            }
        }

    Delegated Approach

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } finally {
                    CleanUp();
                }
            }
            public static void Step1() {
                try {
                    ExternalApp.Dangerous1();
                    ExternalApp.Dangerous2();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
            public static void Step2() {
                try {
                    ExternalApp.Dangerous3();
                    ExternalApp.Dangerous4();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
            public static void Step3() {
                try {
                    ExternalApp.Dangerous5();
                    ExternalApp.Dangerous6();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
        }

    Obsessive-Compulsive Approach

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } finally {
                    CleanUp();
                }
            }
            public static void Step1() {
                try {
                    ExternalApp.Dangerous1();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
                try {
                    ExternalApp.Dangerous2();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
            public static void Step2() {
                try {
                    ExternalApp.Dangerous3();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
                try {
                    ExternalApp.Dangerous4();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
            public static void Step3() {
                try {
                    ExternalApp.Dangerous5();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
                try {
                    ExternalApp.Dangerous6();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
        }

    Other approaches welcomed and encouraged. Above are examples only.
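    One middle ground that sometimes comes up for this kind of batch scenario is to factor the log-and-rethrow boilerplate into a tiny helper, so each dangerous call still gets its own log entry without repeating the try/catch at every call site. A minimal sketch, not part of the original question: the Guarded class and its Log method are invented for illustration, and ExternalApp/CleanUp refer to the placeholders in the examples above.

        using System;

        static class Guarded {
            // Wraps one risky call: log the failure with a label, then rethrow so
            // the top-level handler can still clean up gracefully and quit.
            public static void Run(string label, Action action) {
                try {
                    action();
                } catch (Exception e) {
                    Log(label, e); // stand-in for the Log(e) used in the examples above
                    throw;
                }
            }

            static void Log(string label, Exception e) {
                Console.Error.WriteLine(label + " failed: " + e);
            }
        }

        // Usage inside the Step methods, e.g.:
        //     Guarded.Run("Dangerous1", ExternalApp.Dangerous1);
        //     Guarded.Run("Dangerous2", ExternalApp.Dangerous2);

    This keeps the per-call log granularity of the obsessive-compulsive layout while reading closer to the delegated one; whether that trade-off is worth the indirection is exactly the judgment call the question is asking about.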

    Read the article

  • wcf http 504: Working on a mystery

    - by James Fleming
    Ok,  So you're here because you've been trying to solve the mystery of why you're getting a 504 error. If you've made it to this lonely corner of the Internet, then the advice you're getting from other bloggers isn't the answer you are after. It wasn't the answer I needed either, so once I did solve my problem, I thought I'd share the answer with you. For starters, if by some miracle, you landed here first you may not already know that the 504 error is NOT coming from IIS or Casini, that response code is coming from Fiddler. HTTP/1.1 504 Fiddler - Receive Failure Content-Type: text/html Connection: close Timestamp: 09:43:05.193 ReadResponse() failed: The server did not return a response for this request.       The take away here is Fiddler won't help you with the diagnosis and any further digging in that direction is a red herring. Assuming you've dug around a bit, you may have arrived at posts which suggest you may be getting the error because you're trying to hump too much data over the wire, and have an urgent need to employ an anti-pattern: due to a special case: http://delphimike.blogspot.com/2010/01/error-504-in-wcfnet-35.html Or perhaps you're experiencing wonky behavior using WCF-CustomIsolated Adapter on Windows Server 2008 64bit environment, in which case the rather fly MVP Dwight Goins' advice is what you need. http://dgoins.wordpress.com/2009/12/18/64bit-wcf-custom-isolated-%E2%80%93-rest-%E2%80%93-%E2%80%9C504%E2%80%9D-response/ For me, none of that was helpful. I could perform a get on a single record  http://localhost:8783/Criterion/Skip(0)/Take(1) but I couldn't get more than one record in my collection as in:  http://localhost:8783/Criterion/Skip(0)/Take(2) I didn't have a big payload, or a large number of objects (as you can see by the size of one record below) - - A-1B f5abd850-ec52-401a-8bac-bcea22c74138 .biological/legal mother This item refers to the supervisor’s evaluation of the caseworker’s ability to involve the biological/legal mother in the permanency planning process. 75d8ecb7-91df-475f-aa17-26367aeb8b21 false true Admin account 2010-01-06T17:58:24.88 1.20 764a2333-f445-4793-b54d-1c3084116daa So while I was able to retrieve one record without a hitch (thus the record above) I wasn't able to return multiple records. I confirmed I could get each record individually, (Skip(1)/Take(1))so it stood to reason the problem wasn't with the data at all, so I suspected a serialization error. The first step to resolving this was to enable WCF Tracing. Instructions on how to set it up are here: http://msdn.microsoft.com/en-us/library/ms733025.aspx. The tracing log led me to the solution. The use of type 'Application.Survey.Model.Criterion' as a get-only collection is not supported with NetDataContractSerializer.  Consider marking the type with the CollectionDataContractAttribute attribute or the SerializableAttribute attribute or adding a setter to the property. So I was wrong (but close!). The problem was a deserializing issue in trying to recreate my read only collection. http://msdn.microsoft.com/en-us/library/aa347850.aspx#Y1455 So looking at my underlying model, I saw I did have a read only collection. Adding a setter was all it took.         
    public virtual ICollection<string> GoverningResponses
    {
        get
        {
            if (!string.IsNullOrEmpty(GoverningResponse))
            {
                return GoverningResponse.Split(';');
            }
            else
                return null;
        }
    }

    Hope this helps. If it does, post a comment.
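    For readers hunting for the shape of the fix: the snippet above is the get-only version, and a minimal sketch of the same property with a setter added might look like the following. The setter body (folding the collection back into the underlying semicolon-delimited GoverningResponse string) is an assumption for illustration, not the author's actual code, and presumes a using System.Linq directive.

        public virtual ICollection<string> GoverningResponses
        {
            get
            {
                if (!string.IsNullOrEmpty(GoverningResponse))
                {
                    return GoverningResponse.Split(';');
                }
                return null;
            }
            set
            {
                // A setter (even a simple one) is what lets NetDataContractSerializer
                // rebuild the collection when the object is deserialized.
                GoverningResponse = (value == null) ? null : string.Join(";", value.ToArray());
            }
        }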

    Read the article

  • List of available whitepapers as at 04 May 2010

    - by Anthony Shorten
    The following table lists the whitepapers available, from My Oracle Support, for any Oracle Utilities Application Framework based product. Each entry gives the KB Id, the document title, and the contents:

    - 559880.1 - ConfigLab Design Guidelines: Whitepaper outlining how to design and implement a ConfigLab solution.
    - 560367.1 - Technical Best Practices for Oracle Utilities Application Framework Based Products: Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.
    - 560382.1 - Performance Troubleshooting Guideline Series: A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows:
        - Concepts - General concepts and performance troubleshooting processes.
        - Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions.
        - Network Troubleshooting - General troubleshooting of the network with common issues and resolutions.
        - Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions.
        - Server Troubleshooting - General troubleshooting of the operating system with common issues and resolutions.
        - Database Troubleshooting - General troubleshooting of the database with common issues and resolutions.
        - Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions.
    - 560401.1 - Software Configuration Management Series: A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. The individual whitepapers are as follows:
        - Concepts - General concepts and introduction.
        - Environment Management - Principles and techniques for creating and managing environments.
        - Version Management - Integration of version control and version management of configuration items.
        - Release Management - Packaging configuration items into a release.
        - Distribution - Distribution and installation of releases across environments.
        - Change Management - Generic change management processes for product implementations.
        - Status Accounting - Status reporting techniques using product facilities.
        - Defect Management - Generic defect management processes for product implementations.
        - Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation.
        - Implementing Service Packs - Discussion on service packs and how to use them in an implementation.
        - Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades.
    - 773473.1 - Oracle Utilities Application Framework Security Overview: Whitepaper summarizing the security facilities in the framework. Updated for OUAF 4.0.1.
    - 774783.1 - LDAP Integration for Oracle Utilities Application Framework based products: Whitepaper summarizing how to integrate an external LDAP based security repository with the framework.
    - 789060.1 - Oracle Utilities Application Framework Integration Overview: Whitepaper summarizing all the various common integration techniques used with the product (with case studies).
    - 799912.1 - Single Sign On Integration for Oracle Utilities Application Framework based products: Whitepaper outlining a generic process for integrating an SSO product with the framework.
    - 807068.1 - Oracle Utilities Application Framework Architecture Guidelines: This whitepaper outlines the different variations of architecture that can be considered. Each variation will include advice on configuration and other considerations.
    - 836362.1 - Batch Best Practices for Oracle Utilities Application Framework based products: This whitepaper outlines the common and best practices implemented by sites all over the world. Updated for OUAF 4.0.1.
    - 856854.1 - Technical Best Practices V1 Addendum: Addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products containing only V1.x specific advice.
    - 942074.1 - XAI Best Practices: This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework. Updated for OUAF 4.0.1.
    - 970785.1 - Oracle Identity Manager Integration Overview: This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework Based Products and Oracle Identity Manager, used to provision user and user group security information.
    - 1068958.1 - Production Environment Configuration Guidelines (New!): Whitepaper outlining common production-level settings for the products.

    Read the article

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and communicates straight with these services via SOAP and returns DataTables. Now I am designing a new modern web application in a modern style, an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated it's going to take some time to rewrite and replace it. Now to the problem: I'm trying to make it easy in the near future to replace the WCF services with some other data storage, e.g. another endpoint, database or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer has a straight communication with the data storage (WCF service, database, or whatever) and the service layer then uses the repository (Dependency Injection) to get the data. It doesn't care where it gets the data from. Later on I can be in control and structure the data returned from the data storage (DataTable to POCO) and be able to test the logic in the service layer with some mock repository (using Dependency Injection). Below is some code to explain where I'm going with this. But my question is, does this all make sense? Am I making this overly complicated and could this be simplified in any way possible? Does this simplicity make this too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on, e.g. an ORM, and be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated.

    Update 20/08/14: I created a repository factory, so services would all share repositories. Now it's easy to mock a repository, add it to the factory and create a provider using that factory. Any advice is much appreciated. I want to know if I'm making things more complicated than they should be. So it looks like this:

    1. Repository Factory

        public class RepositoryFactory
        {
            private Dictionary<Type, IServiceRepository> repositories;

            public RepositoryFactory()
            {
                this.repositories = new Dictionary<Type, IServiceRepository>();
            }

            public void AddRepository<T>(IServiceRepository repo) where T : class
            {
                if (this.repositories.ContainsKey(typeof(T)))
                {
                    this.repositories.Remove(typeof(T));
                }
                this.repositories.Add(typeof(T), repo);
            }

            public dynamic GetRepository<T>()
            {
                if (this.repositories.ContainsKey(typeof(T)))
                {
                    return this.repositories[typeof(T)];
                }
                throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
            }
        }

    I'm not very fond of dynamic but I don't know how to retrieve that repository otherwise.

    2. Repository and service

        // Service repository interface
        // All repository interfaces extend this
        public interface IServiceRepository { }

        // Invoice repository interface
        // Makes it easy to mock the repository later on
        public interface IInvoiceServiceRepository : IServiceRepository
        {
            List<Invoice> GetInvoices();
        }

        // Invoice repository
        // Connects to some data storage to retrieve invoices
        public class InvoiceServiceRepository : IInvoiceServiceRepository
        {
            public List<Invoice> GetInvoices()
            {
                // Get the invoices from somewhere
                // This could be a WCF, a database, or whatever
                using (InvoiceServiceClient proxy = new InvoiceServiceClient())
                {
                    return proxy.GetInvoices();
                }
            }
        }

        // Invoice service
        // Service that handles talking to a real or a mock repository
        public class InvoiceService
        {
            // Repository factory
            RepositoryFactory repoFactory;

            // Default constructor
            // Default connects to the real repository
            public InvoiceService(RepositoryFactory repo)
            {
                repoFactory = repo;
            }

            // Service function that gets all invoices from some repository (mock or real)
            public List<Invoice> GetInvoices()
            {
                // Query the repository
                return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices();
            }
        }
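    As a quick illustration of the testability goal described above, this is roughly how a unit test could swap in a mock repository through that factory. The MockInvoiceServiceRepository class and the assertion at the end are invented for illustration and are not part of the original question:

        // Hypothetical hand-rolled mock that satisfies IInvoiceServiceRepository.
        public class MockInvoiceServiceRepository : IInvoiceServiceRepository
        {
            public List<Invoice> GetInvoices()
            {
                // Canned data instead of a call to the WCF service.
                return new List<Invoice> { new Invoice(), new Invoice() };
            }
        }

        // Inside a test: register the mock under its interface type and hand the
        // factory to the service under test.
        var factory = new RepositoryFactory();
        factory.AddRepository<IInvoiceServiceRepository>(new MockInvoiceServiceRepository());

        var service = new InvoiceService(factory);
        List<Invoice> invoices = service.GetInvoices();   // served by the mock, no WCF involved
        // e.g. Assert.AreEqual(2, invoices.Count);

    Because GetRepository<T>() returns dynamic, the GetInvoices() call binds at run time; one commonly suggested tweak, if the dynamic bothers you, is to have GetRepository<T>() cast the stored IServiceRepository to T and declare T as its return type.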

    Read the article

  • How do you educate your teammates without seeming condescending or superior?

    - by Dan Tao
    I work with three other guys; I'll call them Adam, Brian, and Chris. Adam and Brian are bright guys. Give them a problem; they will figure out a way to solve it. When it comes to OOP, though, they know very little about it and aren't particularly interested in learning. Pure procedural code is their MO. Chris, on the other hand, is an OOP guy all the way -- and a cocky, condescending one at that. He is constantly criticizing the work Adam and Brian do and talking to me as if I must share his disdain for the two of them. When I say that Adam and Brian aren't interested in learning about OOP, I suspect Chris is the primary reason. This hasn't bothered me too much for the most part, but there have been times when, looking at some code Adam or Brian wrote, it has pained me to think about how a problem could have been solved so simply using inheritance or some other OOP concept instead of the unmaintainable mess of 1,000 lines of code that ended up being written instead. And now that the company is starting a rather ambitious new project, with Adam assigned to the task of getting the core functionality in place, I fear the result. Really, I just want to help these guys out. But I know that if I come across as just another holier-than-thou developer like Chris, it's going to be massively counterproductive. I've considered: Team code reviews -- everybody reviews everybody's code. This way no one person is really in a position to look down on anyone else; besides, I know I could learn plenty from the other members on the team as well. But this would be time-consuming, and with such a small team, I have trouble picturing it gaining much traction as a team practice. Periodic e-mails to the team -- this would entail me sending out an e-mail every now and then discussing some concept that, based on my observation, at least one team member would benefit from learning about. The downside to this approach is I do think it could easily make me come across as a self-appointed expert. Keeping a blog -- I already do this, actually; but so far my blog has been more about esoteric little programming tidbits than straightforward practical advice. And anyway, I suspect it would get old pretty fast if I were constantly telling my coworkers, "Hey guys, remember to check out my new blog post!" This question doesn't need to be specifically about OOP or any particular programming paradigm or technology. I just want to know: how have you found success in teaching new concepts to your coworkers without seeming like a condescending know-it-all? It's pretty clear to me there isn't going to be a sure-fire answer, but any helpful advice (including methods that have worked as well as those that have proved ineffective or even backfired) would be greatly appreciated. UPDATE: I am not the Team Lead on this team. Chris is. UPDATE 2: Made community wiki to accord with the general sentiment of the community (fancy that).

    Read the article

  • Who owns the IP rights of the software without written employment contract? Employer or employee? [closed]

    - by P T
    I am a software engineer who got an idea and, alone, developed an integrated ERP software solution over the past 2 years. I got the idea and coded much of the software in my personal time, utilizing my own resources, but also as an intern/employee at a small wholesale retailer (company A). I had a verbal agreement with the company that I could keep the IP rights to the code and the company would have the "shop rights" to use "a copy" of the software without restrictions. Part of this agreement was that I was heavily underpaid in order to keep the rights. Recently things started to take a downturn in company A as the company grew fairly large, new head management was formed, and new partners were brought in. The original owners distanced themselves from the business, and the new "greedy" group indicated that they want to claim the IP rights to my software, offering me a contract that would split the IP ownership into 50% co-ownership, completely disregarding the initial verbal agreements. As of now there is no single written job description or agreement/contract/policy that I signed with company A; I signed only I-9 and W-4 forms. I now have an opportunity to leave company A and form a new business with 2 partners (company B), obviously using the software as the primary tool. There would be no direct conflict of interest, as company A sells wholesale goods. My core question is: "Who owns the code without a contract? Me or company A? (in FL, US)" Detailed questions:
    - I am familiar with the "shop rights"; I don't have any problem leaving a copy of the code with the company for them to use/enhance to run their wholesale business. What worries me is: can company A make any legal claims to the software/code/IP and potential derived profits/interests after I leave and form company B?
    - Can applying for a copyright of the code at http://www.copyright.gov in my name prevent any legal disputes in the future? Can I use it as evidence for legal defense? Could adding a note specifying company A as exclusive license holder clarify the arrangements?
    - If I leave and company A sues me, what evidence would they use against me? On what basis would they sue, since their business is in a completely different industry than software (wholesale goods)? Every single source file was created/stored on my personal computer with proper documentation, including a copyright notice with my credentials (name/email/address/phone). It's also worth noting that I developed a significant part of the software prior to my involvement with company A, as a student.
    - If I am forced to sign a contract and company A doesn't honor the verbal agreement, making claims towards the ownership, what can I do to settle the matter legally? I would like to avoid the legal process altogether, as my budget for court battles is extremely limited at the moment.
    - Would altering the code beyond recognition and using it for company B prevent company A from making any copyright claims?
    My common sense tells me that what I developed is by default mine in terms of IP, unless there is a signed legal agreement stating otherwise. But looking online, it may be completely backwards, and this really worries me. I understand that this is not legal advice, and I know that to get the ultimate answer I need to hire a lawyer. I am only hoping to get some valuable input/experience/advice/opinion from those who were in a similar situation or are familiar with the topic. Thank you, PT

    Read the article

  • From DBA to Data Analyst

    - by Denise McInerney
    Cross posted from the PASS Blog There is a lot changing in the data professional’s world these days. More data is being produced and stored. More enterprises are trying to use that data to improve their products and services and understand their customers better. More data platforms and tools seem to be crowding the market. For a traditional DBA this can be a confusing and perhaps unsettling time. It’s also a time that offers great opportunity for career growth. I speak from personal experience. We sometimes refer to the “accidental DBA”, the person who finds herself suddenly responsible for managing the database because she has some other technical skills. While it was not accidental, six months ago I was unexpectedly offered a chance to transition out of my DBA role and become a data analyst. I have since come to view this offer as a gift, though at the time I wasn’t quite sure what to do with it. Throughout my DBA career I’ve gotten support from my PASS friends and colleagues and they were the first ones I turned to for counsel about this new situation. Everyone was encouraging and I received two pieces of valuable advice: first, leverage what I already know about data and second, work to understand the business’ needs. Bringing the power of data to bear to solve business problems is really the heart of the job. The challenge is figuring out how to do that. PASS had been the source of much of my technical training as a DBA, so I naturally started there to begin my Business Intelligence education. Once again the Virtual Chapter webinars, local chapter meetings and SQL Saturdays have been invaluable. I work in a large company where we are fortunate to have some very talented data scientists and analysts. These colleagues have been generous with their time and advice. I also took a statistics class through Coursera where I got a refresher in statistics and an introduction to the R programming language. And that’s not the end of the free resources available to someone wanting to acquire new skills. There are many knowledgeable Business Intelligence and Analytics professionals who teach through their blogs. Every day I can learn something new from one of these experts. Sometimes we plan our next career move and sometimes it just happens. Either way a database professional who follows industry developments and acquires new skills will be better prepared when change comes. Take the opportunity to learn something about the changing data landscape and attend a Business Intelligence, Business Analytics or Big Data Virtual Chapter meeting. And if you are moving into this new world of data consider attending the PASS Business Analytics Conference in April where you can meet and learn from those who are already on that road. It’s been said that “the only thing constant is change.” That’s never been more true for the data professional than it is today. But if you are someone who loves data and grasps its potential you are in the right place at the right time.

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple weeks ago, one of my mentees asked to meet, because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to slideshare, I could quickly point her in the direction that my in-person advice would have led her. First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. in this presentation, I describe two modes of reporting results that I use.  Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in evaluation methods practitioners use to evaluate websites and applications. Of course, using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows.  That said, I have been continuing to evolve my methods and reporting techniques, and sometimes, there is no time to create that kind of report -- the team can't wait the days that it takes to take screen shots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete, but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results. 

    Read the article

  • PCI Encryption Key Management

    - by Unicorn Bob
    (Full disclosure: I'm already an active participant here and at StackOverflow, but for reasons that should hopefully be obvious, I'm choosing to ask this particular question anonymously). I currently work for a small software shop that produces software that's sold commercially to manage small- to mid-size business in a couple of fairly specialized industries. Because these industries are customer-facing, a large portion of the software is related to storing and managing customer information. In particular, the storage (and securing) of customer credit card information. With that, of course, comes PCI compliance. To make a long story short, I'm left with a couple of questions about why certain things were done the way they were, and I'm unfortunately without much of a resource at the moment. This is a very small shop (I report directly to the owner, as does the only other full-time employee), and the owner doesn't have an answer to these questions, and the previous developer is...err...unavailable. Issue 1: Periodic Re-encryption As of now, the software prompts the user to do a wholesale re-encryption of all of the sensitive information in the database (basically credit card numbers and user passwords) if either of these conditions is true: There are any NON-encrypted pieces of sensitive information in the database (added through a manual database statement instead of through the business object, for example). This should not happen during the ordinary use of the software. The current key has been in use for more than a particular period of time. I believe it's 12 months, but I'm not certain of that. The point here is that the key "expires". This is my first foray into commercial solution development that deals with PCI, so I am unfortunately uneducated on the practices involved. Is there some aspect of PCI compliance that mandates (or even just strongly recommends) periodic key updating? This isn't a huge issue for me other than I don't currently have a good explanation to give to end users if they ask why they are being prompted to run it. Question 1: Is the concept of key expiration standard, and, if so, is that simply industry-standard or an element of PCI? Issue 2: Key Storage Here's my real issue...the encryption key is stored in the database, just obfuscated. The key is padded on the left and right with a few garbage bytes and some bits are twiddled, but fundamentally there's nothing stopping an enterprising person from examining our (dotfuscated) code, determining the pattern used to turn the stored key into the real key, then using that key to run amok. This seems like a horrible practice to me, but I want to make sure that this isn't just one of those "grin and bear it" practices that people in this industry have taken to. I have developed an alternative approach that would prevent such an attack, but I'm just looking for a sanity check here. Question 2: Is this method of key storage--namely storing the key in the database using an obfuscation method that exists in client code--normal or crazy? Believe me, I know that free advice is worth every penny that I've paid for it, nobody here is an attorney (or at least isn't offering legal advice), caveat emptor, etc. etc., but I'm looking for any input that you all can provide. Thank you in advance!
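    To make the Question 2 concern concrete: whatever transformation the client uses to turn the stored value back into the real key has to ship inside the client, so anyone who can read the (de)obfuscation code can simply run it. A small C# sketch of that idea follows; the padding-and-XOR scheme below is invented purely for illustration and is not the product's actual scheme:

        using System;
        using System.Linq;

        static class KeyObfuscationDemo
        {
            // Invented example: 4 garbage bytes on each side, every key byte XOR'd with 0x5A.
            public static byte[] Obfuscate(byte[] key)
            {
                var garbage = new byte[] { 0xDE, 0xAD, 0xBE, 0xEF };
                return garbage
                    .Concat(key.Select(b => (byte)(b ^ 0x5A)))
                    .Concat(garbage)
                    .ToArray();
            }

            // The inverse necessarily exists in the client code, so anyone who can
            // decompile the (even dotfuscated) binary can recover the real key from
            // the value stored in the database.
            public static byte[] Deobfuscate(byte[] stored)
            {
                return stored
                    .Skip(4)
                    .Take(stored.Length - 8)
                    .Select(b => (byte)(b ^ 0x5A))
                    .ToArray();
            }
        }

    That built-in inverse is why this kind of obfuscation is generally treated as defence-in-depth at best, and why the usual suggestions move the key (or the key that encrypts it) out of the application code and into something secured separately from the database and the client binary.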

    Read the article

  • A programmer who doesn't get to program - where to turn? [closed]

    - by Just an Anon
    I'm in my mid 20's, and have been working as a full-time programmer/developer for the last ~6 years, with several years of part-time freelancing before this, and three straight years of freelancing in the middle of this short career. I work mostly with PHP and the Drupal framework. By and large, I focus on programming custom pieces of functionality; these, of course, vary greatly from project to project. I've got years of solid experience with OOP (have done some Java & C# years ago, too), including intensive experience with front-end development, and even some design work. I've led small teams (2-4 people) of developers. And of course, given the large amount of freelancing, I've got decent project- & client-management skills. My problem is staying motivated at any place of employment. In the time mentioned I've worked (full-time) at six local companies. The longest I've stayed at any company was just over a year. I find that I'll get hired and be very excited and motivated for the first few months, but the work quickly gets "stale." By that I mean that the interesting components (i.e. the programming) get done, and the rest of the work turns into boring cleanup (move a button, add text, change colours, add a field). I don't get challenged, and I don't feel like I'm learning anything new. This happens time and time again, and I always end up leaving for either a new opportunity, or to freelance. I'm wondering if perhaps I've painted myself into a corner with the rather niche work market (although with very high demand and good compensation) and need to explore other career choices. Another possibility is that I may be choosing the wrong places of employment, mostly small agencies, and need to look into working for a larger, more established firm. I find programming, writing code, and architecting solutions very rewarding. When I'm working on an interesting problem I lose all sense of time and 14-16 hours can fly by like minutes. I get the same exciting feeling when I'm doing high-level planning of a complex system, breaking up the work and figuring out how everything will tie together. I absolutely hate doing small, "stupid" changes that pose no challenge, yet seem to make up more and more of my work. I want to find a workplace where I will get to work on challenging tasks and improve in all areas of product development. This may be a programming job, management, architecture of desktop apps, or maybe managing a taco stand on a beach in Mexico - I don't know, and I need some advice and real-world feedback. What are some job areas worth exploring? The requirements are fairly simple:
    - working with computers
    - interacting with others
    - challenging
    - decent pay (I'm making just short of 90k/year with a month of vacation & some benefits, and would like to stay in this range, but am willing to take a temporary cut in pay for a more interesting position)
    Any advice would be much appreciated!

    Read the article

  • TeamCity stopped working once I added NUnit to the mix

    - by Dave
    I'm struggling a lot trying to get our build server going. I am currently running tests in a Windows XP virtual machine, and have installed TeamCity v5.0.3, build 10821. I am using NUnit v2.5.3. I finished the initial setup with TeamCity without any issues at all, provided that I use the sln2008 build runner that makes the entire process almost brainless. It's really quite nice that way, and very satisfying to see your first successful automated build. Now it's time to kick it up a notch and I wanted to get NUnit working. I keep the NUnit 2.5.3 assemblies in an external libs folder in SVN, so I checked that out onto the test system. I selected NUnit 2.5.3 from the build runner options, as the online instructions had recommended. But when I build, I get the following error: Window1.xaml.cs(14,7): error CS0246: The type or namespace name ‘NUnit’ could not be found (are you missing a using directive or an assembly reference?) Window1.xaml.cs(28,10): error CS0246: The type or namespace name ‘Test’ could not be found (are you missing a using directive or an assembly reference?) Window1.xaml.cs(28,10): error CS0246: The type or namespace name ‘TestAttribute’ could not be found (are you missing a using directive or an assembly reference?) Everything compiles great in the IDE. From finding blog posts and submitting comments, I got some advice and confirmed the following: I have the HintPath value set properly in my project file (points to the external lib) I can also do a full Release and Debug build from the command line using msbuild I have tried do use the NUnit installer so nunit.framework.dll gets registered into the GAC I have changed the build agent's logon account to be a user on the test system, rather than LOCAL SYSTEM. Nothing seems to help... can anyone else here offer me some advice on what to try next?

    Read the article

  • Update iPhone UIProgressView during NSURLConnection download.

    - by Scott Pendleton
    I am using this code: NSURLConnection *oConnection=[[NSURLConnection alloc] initWithRequest:oRequest delegate:self]; to download a file, and I want to update a progress bar on a subview that I load. For that, I use this code: - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { [oReceivedData appendData:data]; float n = oReceivedData.length; float d = self.iTotalSize; NSNumber *oNum = [NSNumber numberWithFloat:n/d]; self.oDPVC.oProgress.progress = [oNum floatValue]; } The subview is oDPVC, and the progress bar on it is oProgress. Setting the progress property does not update the control. From what I have read on the Web, lots of people want to do this and yet there is not a complete, reliable sample. Also, there is much contradictory advice. Some people think that you don't need a separate thread. Some say you need a background thread for the progress update. Some say no, do other things in the background and update the progress on the main thread. I've tried all the advice, and none of it works for me. One more thing, maybe this is a clue. I use this code to load the subview during applicationDidFinishLaunching: self.oDPVC = [[DownloadProgressViewController alloc] initWithNibName:@"DownloadProgressViewController" bundle:nil]; [window addSubview:self.oDPVC.view]; In the XIB file (which I have examined in both Interface Builder and in a text editor) the progress bar is 280 pixels wide. But when the view opens, it has somehow been adjusted to maybe half that width. Also, the background image of the view is default.png. Instead of appearing right on top of the default image, it is shoved up about 10 pixels, leaving a white bar across the bottom of the screen. Maybe that's a separate issue, maybe not.

    Read the article

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy? The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly, if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, when they didn't have any feature changes or hot fixes. They had to be tested simply because the business is rolling updates every month. And the risk that is involved... if a mission-critical application is deployed, that takes a few weeks, and multiple departments, to test, is it realistic to expect to have to constantly regression-test this application? One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but that doesn't exist at the moment. Any advice?

    Read the article

  • Method for finding memory leak in large Java heap dumps

    - by Rickard von Essen
    I have to find a memory leak in a Java application. I have some experience with this but would like advice on a methodology/strategy for this. Any reference and advice is welcome. About our situation:
    - Heap dumps are larger than 1 GB.
    - We have heap dumps from 5 occasions.
    - We don't have any test case to provoke this. It only happens in the (massive) system test environment after at least a week's usage.
    - The system is built on an internally developed legacy framework with so many design flaws that it is impossible to count them all.
    - Nobody understands the framework in depth. It has been transferred to one guy in India who barely keeps up with answering e-mails.
    - We have done snapshot heap dumps over time and concluded that there is not a single component increasing over time. It is everything that grows slowly.
    - The above points us in the direction that it is the framework's homegrown ORM system that increases its usage without limits. (This system maps objects to files?! So not really an ORM.)
    Question: What is the methodology that helped you succeed in hunting down leaks in an enterprise-scale application?

    Read the article

  • JQUERY: Setting Active state on animated menu tabs

    - by Tony
    I have image sprites that use jQuery to set the background position on mouseover and mouseout events. When I set the active-state background position using jQuery it works fine until I move my cursor away from the active 'tab', which then fires the mouseout animation. What I want is for the click event to stop the animation on the active tab while still allowing the animation to work on the other tabs, and, when another tab is clicked, for the active state to move from the current tab to the newly clicked one.

    jQuery:

        $(function(){
            /* This script changes main nav links hover state */
            $('.navigation a')
                .css( {backgroundPosition: "-1px -120px"} )
                .mouseover(function(){
                    $(this).stop().animate({backgroundPosition:"(-1px -240px)"}, {duration:400})
                })
                .mouseout(function(){
                    $(this).stop().animate({backgroundPosition:"(-1px -120px)"}, {duration:400, complete:function (){
                        $(this).css({backgroundPosition: "-1px -120px"})
                    }})
                })
        });

        $(document).ready(function(){
            $("a").click(function(){
                $(this).css({backgroundPosition: "-1px -235px"});
            });
        });

    HTML:

        <ul class="navigation">
            <li><a href="#index" tabindex="10" title="Home" id="homeButton"></a></li>
            <li><a href="#about" tabindex="20" title="About us" id="aboutButton"></a></li>
            <li><a href="#facilities" tabindex="30" title="Our facilities and opening Times" id="facilitiesButton"></a></li>
            <li><a href="#advice" tabindex="40" title="Advice and useful links" id="adviceButton"></a></li>
            <li><a href="#find" tabindex="50" title="How to find Us" id="findButton"></a></li>
            <li><a href="#contact" tabindex="60" title="Get in touch with us" id="contactButton"></a></li>
        </ul>

    You can see what I've got so far here. Thanks in advance for any help.
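    For reference, a minimal sketch of one way to do this, assuming the same background-position animation support the question's code already relies on (animating backgroundPosition needs a helper plugin in stock jQuery). It tracks the active tab in a variable, skips the hover animations for that tab, and moves the active state when another tab is clicked:

        $(function () {
            var active = null; // the <a> element that currently holds the active state

            $('.navigation a')
                .css({backgroundPosition: "-1px -120px"})
                .mouseover(function () {
                    if (this === active) return;           // leave the active tab alone
                    $(this).stop().animate({backgroundPosition: "(-1px -240px)"}, {duration: 400});
                })
                .mouseout(function () {
                    if (this === active) return;
                    $(this).stop().animate({backgroundPosition: "(-1px -120px)"}, {duration: 400});
                })
                .click(function () {
                    if (active) {                          // restore the previously active tab
                        $(active).stop().css({backgroundPosition: "-1px -120px"});
                    }
                    active = this;
                    $(this).stop().css({backgroundPosition: "-1px -235px"});
                });
        });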

    Read the article

  • Subversion versus Vault

    - by WebDude
    I'm currently reviewing the benefits of moving from SVN to SourceGear Vault. Has anyone got advice or a link to a detailed comparison between the two? Bear in mind I would have to move my current source control system across, which works strongly in SVN's favor. Here is some info I have found so far from my own investigations. I have been taking some time tests between the two, and Vault seems to perform most operations much faster. The time tests used the same server as the repository, the same workstation client, and the same project.

    Time comparisons:

        Operation              SVN      Vault
        Add/Commit             12:30    4:45
        Get Latest Revision    5:35     0:51
        Tagging/Labelling      0:01     0:30
        Branching              N/A*     3:23

        * I don't think true branching exists in SVN.

    I also found an online source comparing some other points; this is the kind of information I'm looking for.

    Usage comparisons:
    - Subversion is edit/merge/commit only. Vault allows you to do either edit/merge/commit or checkout/edit/checkin.
    - Vault looks and acts just like VSS, which makes the learning curve effectively zero for VSS users.
    - Vault has a VS plugin, but it only works if you're going to run in checkout mode.
    - Subversion has clients for pretty much every OS you can imagine; Vault has a GUI client for Windows and a command-line client for Mono.
    - Both will support remote work, since both use HTTP as their transport (Subversion uses extended DAV, Vault uses SOAP).
    - Subversion installation, especially with Apache, is more complex.
    - Subversion has a lot of third-party support; Vault has just a few things.

    My question: has anyone got advice or a link to a detailed comparison between the two?
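    For what it's worth on the branching row above: Subversion does support branching; a branch is a cheap server-side copy, so the operation takes roughly constant time regardless of project size. A minimal sketch, with the repository URLs made up for illustration:

        # Create a branch as a copy of trunk (a cheap copy on the server)
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/my-feature \
                 -m "Create my-feature branch"

        # Later, merge the branch changes back into a trunk working copy
        svn merge http://svn.example.com/repo/branches/my-feature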

    Read the article

  • Trouble getting started with Spring Roo and GWT

    - by Abdel Olakara
    Hi all, I am trying to get started with Spring Roo and GWT after seeing the keynote. Unfortunately I am stuck on this issue. I successfully created the project using Roo and added the persistence and the entities, but when I run the command "perform package" I get this error:

        23/5/10 12:10:13 AM AST: [ERROR] ApplicationEntityTypesProcessor cannot be resolved
        23/5/10 12:10:13 AM AST: [ERROR] ApplicationEntityTypesProcessor cannot be resolved to a type
        23/5/10 12:10:13 AM AST: [WARN] advice defined in org.springframework.mock.staticmock.AnnotationDrivenStaticEntityMockingControl has not been applied [Xlint:adviceDidNotMatch]
        23/5/10 12:10:13 AM AST: [WARN] advice defined in org.springframework.mock.staticmock.AbstractMethodMockingControl has not been applied [Xlint:adviceDidNotMatch]
        23/5/10 12:10:13 AM AST: Build errors for helloroo; org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:aspectj-maven-plugin:1.0:compile (default) on project helloroo: Compiler errors :
        error at import tp.gwt.request.ApplicationEntityTypesProcessor;

    I see this in the Maven console and cannot complete the build. I know some jar is missing, but which one and why? I downloaded all the latest versions, including the GWT milestone release. Any idea why this error is occurring? How do I resolve this issue? Thanks in advance, Abdel Olakara

    Read the article

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given the task of developing an application to automate some aspects of stock trading. While working on the initial architecture, the database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is advice on choosing the most suitable database engine and some pointers to manuals or books which describe high-load database development. The specifics of the project are as follows:
    - OS: Windows NT family (Server 2008 / 7)
    - Primary platform: .NET with C#
    - Database structure: one table to hold the primary items and two or three tables with foreign keys to the first table to hold additional information.
    - Database SELECT requirements: need super-fast selection by foreign key, and by the combination of a foreign key and one other column (presumably DATETIME).
    - Database INSERT requirements: the faster the better :)
    If there is a significant performance gain to be had, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that, please give me some advice on what the best database for my project is. Links or references to manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows into 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECTs with 0-2 WHERE clauses at a similar rate.
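    For reference, whichever engine is chosen, the lookup patterns listed above mostly come down to indexing. A minimal T-SQL sketch of the kind of schema and composite index that covers both the "by foreign key" and the "by foreign key plus DATETIME" selects; the table and column names are invented for illustration:

        -- One "primary items" table plus a detail table keyed by it.
        CREATE TABLE Instrument (
            InstrumentId INT IDENTITY PRIMARY KEY,
            Symbol       NVARCHAR(16) NOT NULL
        );

        CREATE TABLE Tick (
            TickId       BIGINT IDENTITY PRIMARY KEY,
            InstrumentId INT NOT NULL REFERENCES Instrument(InstrumentId),
            Ts           DATETIME NOT NULL,
            Price        DECIMAL(18, 6) NOT NULL,
            Volume       INT NOT NULL
        );

        -- A single composite index serves both lookup patterns:
        --   WHERE InstrumentId = @id
        --   WHERE InstrumentId = @id AND Ts BETWEEN @from AND @to
        CREATE INDEX IX_Tick_InstrumentId_Ts ON Tick (InstrumentId, Ts);

    At the stated rates (a handful of inserts every 30-50 ms), batching inserts into one transaction per round-trip tends to matter as much as the choice of engine.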

    Read the article

  • Creating and publishing an Excel file in MOSS 2007 using data from SQL Server

    - by Diomos
    Hello, I need help with this matter: We have an Excel template in which all the calculations are already set up. A user can request a 'report'. The idea is to create a button on our site (a SharePoint portal); after clicking it, a new Excel file is generated. This means getting current data from the database (SQL Server 2005 SP2), importing it into the template, letting the calculations produce the proper results, and then allowing the user to see the file. For now it is enough to publish the final Excel file in a document library. I am quite new to WSS 3.0 and MOSS 2007 and need some advice on the best solution. It looks like quite a complex task to me. Is there a direct way to accomplish this? Or do I need one tool to get the data from the database and import it into the Excel file (SSRS?) and another tool to publish it in the document library (MOSS 2007 Excel Services?). I have heard about PerformancePoint Server 2007; is this a path worth following? Thanks in advance for any advice!

    Read the article

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup:
    - The statements are all trivially simple. Most are of the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1.
    - The most complex statement in the lot is simply three such selects joined together with UNION ALLs.
    - Each page hit executes at most one statement, and executes it only once.
    - I'm in a hosted environment and therefore leery of slamming their servers by running any "stress tests" personally.
    Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared-statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it? EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted"; lots of different perspectives. Ultimately, though, I have to give rick his due: without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!
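    For reference, the setting the asker settled on is PDO's emulated prepares, where PDO quotes and splices the parameters client-side, so each query costs a single round-trip while remaining parameterized. A minimal sketch, with the DSN and credentials as placeholders:

        <?php
        // Placeholders: adjust host, database, user and password for the real site.
        $pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'password', array(
            PDO::ATTR_EMULATE_PREPARES => true,               // no server-side prepare round-trip
            PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
        ));

        // Still parameterized: the value is quoted before being spliced into the SQL.
        $stmt = $pdo->prepare('SELECT foo, bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1');
        $stmt->execute(array($quux));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);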

    Read the article
