Search Results

Search found 1517 results on 61 pages for 'aspect ratio'.

  • Fun tips with Analytics

    - by user12620172
    If you read this blog, I am assuming you are at least familiar with the Analytic functions in the ZFSSA. They are basically amazing, very powerful and deep. However, you may not be aware of some great, hidden functions inside the Analytic screen. Once you open a metric, the toolbar looks like this: Now, I’m not going over every tool, as we have done that before, and you can hover your mouse over them and they will tell you what they do. But…. check this out.

    Open a metric (CPU Percent Utilization works fine), and click on the “Hour” button, which is the 2nd clock icon. That’s easy, you are now looking at the last hour of data. Now, hold down your ‘Shift’ key, and click it again. Now you are looking at 2 hours of data. Hold down Shift and click it again, and you are looking at 3 hours of data. Are you catching on yet? You can do this not only with the ‘Hour’ button, but also with the ‘Minute’, ‘Day’, ‘Week’, and the ‘Month’ buttons. Very cool. It also works with the ‘Show Minimum’ and ‘Show Maximum’ buttons, allowing you to go to the next iteration of either of those. One last button you can Shift-click is the handy ‘Drill’ button. This button usually drills down on one specific aspect of your metric. If you Shift-click it, it will display a “Rainbow Highlight” of the current metric. This works best if this metric has many ‘Range Average’ items in the left-hand window. Give it a shot.

    Also, you will sometimes click on a certain second of data in the graph, like this: In this case, I clicked 4:57 and 21 seconds, and the 'Range Average' on the left went away, and was replaced by the time stamp. It seems at this point to some people that you are now stuck, and cannot get back to an average for the whole chart. However, you can actually click on the actual time stamp of "4:57:21" right above the chart. Even though your mouse pointer does not change into the typical pointing hand that most links show, you can click it, and it will change your range back to the full metric.

    Another trick you may like is to save a certain view or look of a group of graphs. Most of you know you can save a worksheet, but did you know you can Sync them, Pause them, and then Save it? This will save the paused state, allowing you to view it forever the way you see it now.

    Heatmaps. Heatmaps are cool, and look like this: Some metrics use them and some don't. If you have one, and wish to zoom it vertically, try this. Open a heatmap metric like my example above (I believe every metric that deals with latency will show as a heatmap). Select one or two of the ranges on the left. Click the "Change Outlier Elimination" button. Click it again and check out what it does.

    Enjoy. Perhaps my next blog entry will be the best Analytic metrics to keep your eyes on, and how you can use the Alerts feature to watch them for you. Steve

    Read the article

  • PCI Encryption Key Management

    - by Unicorn Bob
    (Full disclosure: I'm already an active participant here and at StackOverflow, but for reasons that should hopefully be obvious, I'm choosing to ask this particular question anonymously.) I currently work for a small software shop that produces software that's sold commercially to manage small- to mid-size businesses in a couple of fairly specialized industries. Because these industries are customer-facing, a large portion of the software is related to storing and managing customer information, in particular the storage (and securing) of customer credit card information. With that, of course, comes PCI compliance. To make a long story short, I'm left with a couple of questions about why certain things were done the way they were, and I'm unfortunately without much of a resource at the moment. This is a very small shop (I report directly to the owner, as does the only other full-time employee), the owner doesn't have an answer to these questions, and the previous developer is...err...unavailable.

    Issue 1: Periodic Re-encryption. As of now, the software prompts the user to do a wholesale re-encryption of all of the sensitive information in the database (basically credit card numbers and user passwords) if either of these conditions is true:

    - There are any NON-encrypted pieces of sensitive information in the database (added through a manual database statement instead of through the business object, for example). This should not happen during the ordinary use of the software.
    - The current key has been in use for more than a particular period of time. I believe it's 12 months, but I'm not certain of that. The point here is that the key "expires".

    This is my first foray into commercial solution development that deals with PCI, so I am unfortunately uneducated on the practices involved. Is there some aspect of PCI compliance that mandates (or even just strongly recommends) periodic key updating? This isn't a huge issue for me other than I don't currently have a good explanation to give to end users if they ask why they are being prompted to run it. Question 1: Is the concept of key expiration standard, and, if so, is that simply industry standard or an element of PCI?

    Issue 2: Key Storage. Here's my real issue...the encryption key is stored in the database, just obfuscated. The key is padded on the left and right with a few garbage bytes and some bits are twiddled, but fundamentally there's nothing stopping an enterprising person from examining our (dotfuscated) code, determining the pattern used to turn the stored key into the real key, and then using that key to run amok. This seems like a horrible practice to me, but I want to make sure that this isn't just one of those "grin and bear it" practices that people in this industry have taken to. I have developed an alternative approach that would prevent such an attack, but I'm just looking for a sanity check here. Question 2: Is this method of key storage--namely, storing the key in the database using an obfuscation method that exists in client code--normal or crazy?

    Believe me, I know that free advice is worth every penny that I've paid for it, nobody here is an attorney (or at least isn't offering legal advice), caveat emptor, etc. etc., but I'm looking for any input that you all can provide. Thank you in advance!
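
    To make the key-storage discussion in Issue 2 concrete, here is a minimal sketch of one common alternative, envelope encryption: the data-encryption key (DEK) that protects card data is stored only in wrapped (encrypted) form, under a key-encryption key (KEK) kept outside the database. This is not the poster's actual solution, and the class and method names are illustrative; it is sketched in Java purely for illustration, and the same idea carries over to other platforms.

        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;

        // Hypothetical sketch: wrap/unwrap a data-encryption key (DEK) under a
        // key-encryption key (KEK) that is never stored next to the data.
        public class KeyWrappingSketch {

            // Generate a fresh DEK; this is the key that actually encrypts card data.
            static SecretKey newDataEncryptionKey() throws Exception {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(128);
                return kg.generateKey();
            }

            // Wrap the DEK under the KEK before persisting it to the database.
            static byte[] wrapForStorage(SecretKey dek, SecretKey kek) throws Exception {
                Cipher c = Cipher.getInstance("AESWrap");
                c.init(Cipher.WRAP_MODE, kek);
                return c.wrap(dek);
            }

            // Unwrap at runtime; only code that holds the KEK can recover the DEK,
            // so an attacker who reads the database alone learns nothing useful.
            static SecretKey unwrapFromStorage(byte[] wrapped, SecretKey kek) throws Exception {
                Cipher c = Cipher.getInstance("AESWrap");
                c.init(Cipher.UNWRAP_MODE, kek);
                return (SecretKey) c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
            }
        }

    Rotating a key then becomes a matter of unwrapping under the old KEK and re-wrapping (or re-encrypting the data) under a new one, which is one reason key "cryptoperiods" show up in key-management guidance.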

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data was stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues; it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets.

    You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have.

    In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who doesn’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Web Service Example - Part 3: Asynchronous

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 3 of our Web Service examples. In this posting we'll take a look at firing the web service asynchronously and then filling in the UI when it completes. This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: This is a different workspace than WS-Part2.

    What's different? In this example, when you click the Search button on the Forecast By Zip option, it now takes you directly to the results page, which is initially blank. When the web service returns a second or two later, the data pops into the UI. If you go back to the search page and hit Search, it will again clear the results and invoke the web service asynchronously. This isn't really that useful for this particular example, but it shows an important technique that can be used for other use cases.

    How it was done:

    1) First we created a new class, ForecastWorker, that implements the Runnable interface. This is used as our worker class that we create an instance of and pass to a new thread that we create when the Search button is pressed, inside the retrieveForecast actionListener handler. Once the thread is started, retrieveForecast returns immediately.

    2) The rest of the code that we had previously in the retrieveForecast method has now been moved to retrieveForecastAsync. Note that we've also added synchronized specifiers on both these methods so they are protected from re-entrancy.

    3) The run method of the ForecastWorker class then calls the retrieveForecastAsync method. This executes the web service code that we had previously, but now on a separate thread so the UI is not locked. If we had already shown data on the screen it would have appeared before this was invoked. Note that you do not see a loading indicator either, because this is on a separate thread and nothing is blocked.

    4) The last but very important aspect of this method is that once we update data in the collections from the data we retrieve from the web service, we call AdfmfJavaUtilities.flushDataChangeEvents(). We need this because as data is updated in the background thread, those data change events are not propagated to the main thread until you explicitly flush them. As soon as you do this, the UI will get updated if any changes have been queued.

    Summary of Fundamental Changes In This Application: The most fundamental change is that we are invoking and handling our web services in a background thread and updating the UI when the data returns. This allows an application to provide a better user experience in many cases, because data that is already available locally is displayed while lengthy queries or web service calls are done in the background and the UI is updated when they return. There are many different use cases for background threads, and this is just one example of optimizing the user experience and generating a better mobile application.
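
    The pattern from steps 1-4 boils down to a short sketch. This is not the exact code from the downloadable workspace; the class shape and names are illustrative, and the web service call itself is elided.

        import oracle.adfmf.framework.api.AdfmfJavaUtilities;

        // Illustrative sketch of the background-thread pattern described above.
        public class ForecastBean {

            // Worker that runs the web-service call off the UI thread.
            private class ForecastWorker implements Runnable {
                public void run() {
                    retrieveForecastAsync();
                }
            }

            // Bound to the Search button's actionListener: start the worker thread
            // and return immediately so navigation to the results page is not blocked.
            public synchronized void retrieveForecast() {
                new Thread(new ForecastWorker()).start();
            }

            // Runs on the background thread: invoke the web service, update the
            // bound collections, then flush queued data-change events to the UI.
            private synchronized void retrieveForecastAsync() {
                // ... call the web service and populate the collections here ...
                AdfmfJavaUtilities.flushDataChangeEvents();
            }
        }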

    Read the article

  • Introducing the Metro User Interface on Windows 2012

    - by andywe
    Although I am a big fan of using PowerShell to do many of my server operations, that aspect is well covered by those far more knowledgeable than I, and there is vast information around the web on it already. The new Metro interface, and getting around both Windows 8 and Windows Server 2012, is relatively new though, even for those who ran the previews.

    What is this? A blank desktop! Where did the Start button go? Well, it is still there...sort of. It is hidden, and acts like an auto-hidden component that appears only when the mouse is hovered over the lower left corner of the screen. Those familiar with Gnome or OS X can relate this to the "Hot Corners" functions. To get to the Start button, hover your mouse in the very lower left corner of the task bar. Let it sit there a moment, and a small blue square with colored tiles in it, called Start, will appear. Click it. I clicked it and now I have all the tiles. What is this?

    Welcome to the Metro interface. This is a much more modern look, and although at first it seems weird and cumbersome, I have actually found that it is a bit more extensible, allowing greater organization and customization than the older Explorer desktop. If you look closely, you'll see each box represents either a program or a program group.

    First, a few basics about using the Start view. First and foremost, a right mouse click will bring up a bar on the bottom, with an icon towards the right. Notice it is titled "All Apps". An even easier way in many places is to hover your mouse in the exact opposite corner, in the upper right. A sidebar will open and expose what used to be a widget bar (remember Vista?), and there are options for Search, Start, and Settings.

    OK, great, but where is everything? It's all there...click the All Apps icon. Look better? Notice the scroll bar at the bottom. Move it right. Your desktop is sized to your content, so you can have a smaller or larger amount of programs exposed. Each icon can be secondary-clicked (a right mouse click for most of us), and an options bar at the bottom, rather than the old small context menu, is opened with some very familiar options.

    Notice the top of the Windows Explorer window has some new features. You still have your right mouse click functions, but since the shortcuts for these items already exist...just copy them. There are many ways, but here is a long way to show you more of the interface.

    1. Right mouse click a program icon, and select the Open File Location option.

    2. The trusty file manager opens...but if you look closely at the top edge of the window, you'll see a nifty enhancement: an orange-colored box titled Shortcut Tools and another, lavender box titled Application Tools. Each of these adds options at the top of the file manager window to make selection easy. Of course, you can still secondary-click an item in the listing window too.

    3. Click Shortcut Tools, right click your app shortcut and copy it. Then simply paste it onto the desktop outside the File Explorer window.

    Also note some of the newer features: the large icons up top, below the menu, cover many common operations, and the options change as you select each menu item. Well, that's it for this installment. I hope this helps you out.

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, is pointing to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum.

    A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on them and the device does block-level dedup internally, the device may just deduplicate your redundant metadata down to a block that is stored just once on the non-volatile storage. When this block is corrupted, you have essentially three corrupted copies. Three hits with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS. It's an integrated part, and so important parts don't get deduplicated away. A disk accessed by a block-level interface doesn't know anything about the importance of a block. A metadata block is no different to its inner mechanisms than a normal data block, because there is no way to tell that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. It's just that for most implementations you have to activate it explicitly by command, whereas certain devices do it by default or by design and you don't know about it. However, I'm not perfectly sure about that; given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody else, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device is doing dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing. You've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks on different disks when there is more than just one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, this isn't the use case. Most implementations of SSDs in conjunction with ZFS are hybrid storage pools, where rotating-rust disks are used for the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations this is the already mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, and so it's less probable that you are deduplicating your redundancies. Other filesystems lacking a capability similar to hybrid storage pools are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    However, at the end Robin is correct: it's yet another reason why protecting your data by creating redundancies, by dispersing it over several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • ADF page security - the untold password rule

    - by ankuchak
    I'm kinda new to Oracle ADF, so in this blog post I'm going to share something with you that I faced (and recovered from) recently. Initially I wondered whether I should put up a blog post on this at all, because it's totally simple. Still, simplicity is a relative term. So without wasting further time, let's kick off.

    I was exploring the ADF security aspect to secure a page through HTML basic authentication. The idea is very simple, and the credential store etc. come into the picture. But I was not able to run a successful test of this phenomenally simple thing even after trying for over 30 minutes. This is what I did.

    I created a simple JSF page and put a panel in it, with a simple EL to show the current user name. Next I created a user that I should test with. I set the password to myuser, just to keep it simple. Then I created an enterprise role and mapped the user that I just created. Then I created an application role and mapped the enterprise role to it. Then I mapped the resource, the simple JSF page in this case, to this application role. This way, only users with the given application role can access this page (as if you didn't know this, duh!). Of course, I had to create the page definition for the page before I could map it to an application role. What else! Done! Then I hit the Run menu item and it all went well...

    Until... I got this message. I entered the correct credentials repeatedly, 2-3 times. Still I got the same error. Why? I didn't get any error message during the deployment. Nope. Then, as I said before, I spent over 30 minutes trying different things out, things like mapping only the user (not the role) to the page, changing the context root, etc. Nothing worked!

    Then of course I bothered to look at the logs and found this. See the first red line. That says it all. So the problem was with that password. The password must have at least one special character and one digit in it. I think I was misled by the missing password hint/rule and the fact that the deployment didn't fail even though the user was not created properly. Well, yes, I agree that I was foolish enough not to look at the logs. Later I changed the password to something like myuser123# . And it worked. I hope it helped.

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:

    • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year to the financial close, filing, and reporting processes.

    • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.

    • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.

    • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.

    • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis–related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.

    • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.

    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company’s corporate image and share value. The good news is that businesses realize that these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address these issues.

    While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:

    • Improving data integrity
    • Optimizing processes
    • Integrating the extended financial close process

    By addressing these issues, and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.

    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92

    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • Spring AOP AfterThrowing vs. Around Advice

    - by whiskerz
    Hey there, when trying to implement an aspect that is responsible for catching and logging a certain type of error, I initially thought this would be possible using the AfterThrowing advice. However, it seems that this advice doesn't catch the exception, but just provides an additional entry point to do something with the exception. The only advice which would also catch the exception in question would then be an AroundAdvice - either that or I did something wrong. Can anyone confirm that, if I indeed want to catch the exception, I have to use an AroundAdvice? The configuration I used follows:

        @Pointcut("execution(* test.simple.OtherService.print*(..))")
        public void printOperation() {}

        @AfterThrowing(pointcut="printOperation()", throwing="exception")
        public void logException(Throwable exception) {
            System.out.println(exception.getMessage());
        }

        @Around("printOperation()")
        public void swallowException(ProceedingJoinPoint pjp) throws Throwable {
            try {
                pjp.proceed();
            } catch (Throwable exception) {
                System.out.println(exception.getMessage());
            }
        }

    Note that in this example I caught all exceptions, because it is just an example. I know it's bad practice to just swallow all exceptions, but for my current use case I want one special type of exception to be just logged while avoiding duplicate logging logic.

    Read the article

  • Is this an acceptable UI design decision?

    - by DVK
    OK, while I'm on record as stating that the StackExchange UI is pretty much one of the best websites and overall GUIs that I have ever seen as far as usability goes, there's one particular aspect of the trilogy that bugs me. For an example, head on to http://meta.stackoverflow.com. Look at the banner on top (the one that says "reminder -- it's April Fool's Day depending on your time zone!"). Personally, I feel that this is a "make the user do the figuring-out work" anti-pattern (whatever it's officially called) - namely, instead of making your app smart enough to only present a certain mode of operation under the conditions when that mode is appropriate, you simply turn the mode full on and put up an explanation to the user of why the mode is on when it should not be (in this particular example, the mode is of course displaying the unicorn gravatars starting with 00:00 in the first timezone, despite the fact that some users are still living in March 31st). The Great Recalc was also handled the same way - instead of proactively telling the user "your rep was changed from X to Y", the same nearly invisible banner was displayed on meta. So, the questions are: Is there such an official anti-pattern, and if so, what the heck do I call it? Do you have any other well-known examples of such a design anti-pattern? How would you fix either the SO example I made or your own example? Is there a pattern of fixing, or must it be a case-by-case solution?

    Read the article

  • How to write an iphone application to control a device that exposes a telnet api

    - by MAC
    Hi! I have to write an iPhone application that controls a device. This device exposes a telnet-based interface. The application should ideally have user access control and customizability for each user. I was thinking of writing C++ classes that would communicate with the device using sockets. This functionality could then be exposed through web services that can be called by the iPhone application. However, as I looked into it deeper, the API allows you to register for events using telnet and then receive notifications when those events occur. That kinda put a spanner in the works for me. I for one don't know how a "push" scenario can work with web services. First off, I have never programmed for the iPhone so far, so I am not really sure what can be done. So I was thinking: instead of having a web server to go through, why not have the application running independently on the iPhone, directly communicating with the device using sockets? The question though is, is that possible, and second, I am thinking it would raise a security issue. In the first approach we could control security, as everything was going through our central server. Is there a way to handle security (in the sense of who has access to the device) without having a central server? I am sorry that this seems like an unorganized post, but I am trying to brainstorm here. Looking forward to hearing your opinions.

    Read the article

  • Ninject 2 + ASP.NET MVC 2 Binding Types from External Assemblies

    - by Malkier
    Hi, I'm just trying to get started with Ninject 2 and ASP.NET MVC 2. I have followed this tutorial http://www.craftyfella.com/2010/02/creating-aspnet-mvc-2-controller.html to create a controller factory with Ninject and to bind a first abstraction to a concrete implementation. Now I want to load a repository type from another assembly (where my concrete SQL repositories are located) and I just can't get it to work. Here's my code:

    Global.asax.cs:

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory());
        }

    Controller factory:

        public class Kernelhelper
        {
            public static IKernel GetTheKernel()
            {
                IKernel kernel = new StandardKernel();
                kernel.Load(System.Reflection.Assembly.Load("MyAssembly"));
                return kernel;
            }
        }

        public class MyControllerFactory : DefaultControllerFactory
        {
            private IKernel kernel = Kernelhelper.GetTheKernel();

            protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
            {
                return controllerType == null ? null : (IController)kernel.Get(controllerType);
            }
        }

    In "MyAssembly" there is a module:

        public class ExampleConfigModule : NinjectModule
        {
            public override void Load()
            {
                Bind<Domain.CommunityUserRepository>().To<SQLCommunityUserRepository>();
            }
        }

    Now when I just slap in a MockRepository object at my entry point it works just fine; the controller which needs the repository works fine. The kernel.Load(System.Reflection.Assembly.Load("MyAssembly")) call also does its job and registers the module, but as soon as I call the controller which needs the repository I get an ActivationException from Ninject:

        No matching bindings are available, and the type is not self-bindable.
        Activation path:
          2) Injection of dependency CommunityUserRepository into parameter _rep of constructor of type AccountController
          1) Request for AccountController

    Can anyone give me a best-practice example for binding types from external assemblies (which really is an important aspect of Dependency Injection)? Thank you!

    Read the article

  • C# Wholesale Order form - textboxes in Gridviews in Repeater

    - by tnriverfish
    I'm building a wholesale order form on a website. The current plan is to:

    - get an ArrayList of DepartmentUnits
    - a DepartmentUnit has various attributes like "deptId", "description" and its own ArrayList of StoreItems
    - the StoreItems have an attached ArrayList of various SizeOptions
    - the SizeOptions have an inventory count integer along with their description
    - planning on putting an asp:Repeater on the page that has an asp:GridView in it
    - each DepartmentUnit will have its own GridView
    - each StoreItem will have a row in the GridView
    - each SizeOption will have a TextBox in the row (approximately 10 options)
    - each inventory count will be watermarked over the size option textbox

    The question becomes how will I then collect all this information correctly once the form has been filled out? I don't like the idea of putting all this information in an update panel and then posting back each time a GridView row, or worse one of the row's textboxes, changes. Any advice on putting a single save button on the page and looping through each Repeater item - and each GridViewRow - and each textbox - to get all the values entered? Better to try collecting all the items added in a single table at the bottom of the page and updating the string with jQuery each time a text box is modified? Then just looping through the new table when saved? Not sure I know how to loop through that table though - updating if quantity is changed might be a bear too. If it considerably simplifies the process I could just remove the Repeater aspect and put separate GridViews on separate pages. Thanks!

    Read the article

  • Update table columns bound to NSArrayController

    - by Loz
    Hi, I'm fairly new to the world of bindings in cocoa, and I'm having some troubles (perhaps/probably due to a misunderstanding). I have a singleton that contains an NSMutableArray called plugins, containing objects of class Plugin. It has a method called loadPlugins which adds objects to the plugins array. This may be called at any point. It's been added as an instance in Interface Builder. Also in IB is an NSObjectController, whose content outlet is connected to the singleton. There is also an NSArrayController, whose contentArray is bound to the NSObjectController (controller key is 'selection', model key path is 'plugins', object class name is 'Plugin'). And finally I have a table view with 2 columns, the values of which are bound to the NSArrayController's arrangedObjects, using keys of properties in the Plugin class. So far so standard (as far as I can tell from tutorials at least). My trouble is that when the loadPlugins method is called in the singleton, and objects are added to the plugins array, the table doesn't update to show the objects (unless loadPlugins is called before the nib is loaded). -reloadData called on the tableView doesn't do anything. Is there a way to tell the NSArrayController that the referenced array has been updated? I understand there is the -add: method for NSArrayController, which could be used in the loadPlugins, but this isn't desirable as I want to keep the singleton totally separate from the display aspect. This seems related to: http://stackoverflow.com/questions/1623396/refresh-cocoa-binding-nsarraycontroller-combobox The line: "editing the array behind the controller's back" seems to perhaps pinpoint the problem, but I would hope that it would be possible to have the singleton not know about the controller. Thanks in advance.

    Read the article

  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition but have no experience with the programming aspect of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc. from a security camera-like vantage point. First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera? Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding so a starting point or a tutorial that will guide me up to this point would be great if anyone has suggestions. Finally, what is the best (least complicated/most efficient) way to display video from the camera plus my own superimposed images (boxes around events in progress, for instance) in a GUI application? I can work on any operating system in any language. I have some experience with win32 GUIs and Java GUIs. The focus of the project is on the algorithm and so I'm trying to get the video capture/display portion of the app done cleanly and quickly. Thanks for any responses!!

    Read the article

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am having much confusion over the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace. I have made API changes to Java code residing in an existing plug-in, which thus requires an increase of the major part of the version identifier. This plug-in serves as a dependency to a given feature, where the feature is later included in a product. From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper number should be increased on the containing plug-in itself. However, how would this major version number change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)?

    For example, assume we have the typical "Hello World" setup as follows:

    Plug-in: com.example.helloworld, version 1.0.0
    Feature: com.example.helloworld.feature, version 1.0.0
    Product: com.example.helloworld.product, version 1.0.0

    If I were to make an API change in the plug-in, this would require its version to become 2.0.0. What would then be the version of the feature, 1.1.0? The same question applies at the product level as well (e.g., if the feature is 1.1.0 OR 2.0.0, what is the product version number)?

    I'm sure this is quite the newbie question, so I apologize for wasting anyone's time and effort. I have searched for this type of content but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been about developing feature patches and has not touched on the versioning aspect as much as I would prefer. I am having difficulty coming into (for the first time) an Eclipse RCP / PDE environment and need to learn the proper way and / or best practices for making such versioning updates and how to best reflect this throughout other dependent projects in the workspace.

    Read the article

  • Do connection string DNS lookups get cached?

    - by joshcomley
    Suppose the following: I have a database set up on database.mywebsite.com, which resolves to IP 111.111.1.1, running from a local DNS server on our network. I have countless ASP, ASP.NET and WinForms applications that use a connection string utilising database.mywebsite.com as the server name, all running from the internal network. Then the box running the database dies, and I switch over to a new box with an IP of 222.222.2.2. So, I update the DNS for database.mywebsite.com to point to 222.222.2.2. Will all the applications and computers running them have cached the old resolved IP address? I'm assuming they will have. Any suggestions along the lines of "don't have your IP change each time you switch box" are not too welcome as I cannot control this aspect of the situation, unfortunately. We are currently using the machine name of the box, which changes every time it dies and all apps etc. have to be updated with the new machine name. It hurts.

    Read the article

  • Forking in PHP on Windows

    - by Doug Kavendek
    We are running PHP on a Windows server (a source of many problems indeed, but migrating is not an option currently). There are a few points where a user-initiated action will need to kick off a few things that take a while and about which the user doesn't need to know if they succeed or fail, such as sending off an email or making sure some third-party accounts are updated. If I could just fork with pcntl_fork(), this would be very simple, but the PCNTL functions are not available in Windows. It seems the closest I can get is to do something of this nature: exec( 'php-cgi.exe somescript.php' ); However, this would be far more complicated. The actions I need to kick off rely on a lot of context that already will exist in the running process; to use the above example, I'd need to figure out the essential data and supply it to the new script in some way. If I could fork, it'd just be a matter of letting the parent process return early, leaving the child to work on a few more things. I've found a few people talking about their own work in getting various PCNTL functions compiled on Windows, but none seemed to have anything available (broken links, etc). Despite this question having practically the same name as mine, it seems the problem was more execution timeout than needing to fork. So, is my best option to just refactor a bit to deal with calling php-cgi, or are there other options? Edit: It seems exec() won't work for this, at least not without me figuring some other aspect of it, as it waits until the call returns. I figured I could use START, sort of like exec( 'start php-cgi.exe somescript.php' );, but it still waits until the other script finishes.

    Read the article

  • What is the best solution for a blog with e-commerce?

    - by Yaron
    Hi! While there are loads of Joomla vs Wordpress posts out there, none address which is best suited to a blog with an attached online store. I anticipate having about 40 or so articles and want the full set of blogging features- tags, comments, talkback, sharing options, SEO functionality, support for ads etc. The online store will come later. I'll be the only contributor but I want to keep extensibility in mind with respect to multiple contributors, possible social network integration, and expanded categories of content down the line. I'm a developer with a lot of experience with C# and SQL Server but very little with web development, mostly in ASP.NET and basic HTML/CSS. I'm keen on learning as much as I can but don't want to reinvent the wheel or put the project on hold as I get up the learning curve. I have concerns with both Joomla and Wordpress. Joomla seems like the most extensible option but but all of the articles on blogging with Joomla I've read complain of shortcomings I'd rather not trade off. Wordpress on the other hand seems ideal for the blog aspect of the project, but a lot of people on this site recommend avoiding it for much more than that, including e-commerce. I really don't want to hack together a hybrid. Advice is much appreciated, thanks!

    Read the article

  • Ruby: add custom properties to built-in classes

    - by dreftymac
    Question: Using Ruby it is simple to add custom methods to existing classes, but how do you add custom properties? Here is an example of what I am trying to do:

        myarray = Array.new();
        myarray.concat([1,2,3]);
        myarray._meta_ = Hash.new();   # obviously, this wont work
        myarray._meta_['createdby'] = 'dreftymac';
        myarray._meta_['lastupdate'] = '1993-12-12';

        ## desired result
        puts myarray._meta_['createdby'];   #=> 'dreftymac'
        puts myarray.inspect()              #=> [1,2,3]

    The goal is to construct the class definition in such a way that the stuff that does not work in the example above will work as expected.

    Update (clarify question): One aspect that was left out of the original question: it is also a goal to add "default values" that would ordinarily be set up in the initialize method of the class.

    Update (why do this): Normally, it is very simple to just create a custom class that inherits from Array (or whatever built-in class you want to emulate). This question derives from some "testing-only" code and is not an attempt to ignore this generally acceptable approach.

    Read the article

  • Disassembling with python - no easy solution?

    - by Abc4599
    Hi, I'm trying to create a Python script that will disassemble a binary (a Windows exe to be precise) and analyze its code. I need the ability to take a certain buffer and extract some sort of struct containing information about the instructions in it. I've worked with libdisasm in C before, and I found its interface quite intuitive and comfortable. The problem is, its Python interface is available only through SWIG, and I can't get it to compile properly under Windows. On the availability front, diStorm provides a nice out-of-the-box interface, but it provides only the mnemonic of each instruction, and not a binary struct with enumerations defining instruction type and whatnot. This is quite uncomfortable for my purpose, and will require a lot of what I see as wasted time wrapping the interface to make it fit my needs. I've also looked at BeaEngine, which does in fact provide the output I need, a struct with binary info concerning each instruction, but its interface is really odd and counter-intuitive, and it crashes pretty much instantly when provided with wrong arguments. The ctypes sort of ultimate-death-to-your-Python crashes. So, I'd be happy to hear about other solutions, which are a little less time-consuming than messing around with djgcc or mingw to make SWIGed libdisasm, or writing an OOP wrapper for diStorm. If anyone has some guidance as to how to compile SWIGed libdisasm, or better yet, a compiled binary (pyd or dll+py), I'd love to hear/have it. :) Thanks ahead.

    Read the article

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly getting updated with user transactions, and the other table (C) contains product info that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process, and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. There is one column in the product table that gets utilized in the view and has to be updated each week.

    The problem we are now encountering is that as volume is increasing on our transactions against tables A & B, the update to table C is causing deadlocks. I have tried several different methods to no avail:

    1) I was hoping that we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views.

    2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view.

    3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table. However, when I did that, the indexes on my view were dropped and it took quite a bit of time to recreate them.

    4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.

    Questions:

    a) We are using SQL Server 2005. Does SQL Server 2008 have new functionality with its indexed views that would help us? Is there now a way to do dirty reads with an indexed view?

    b) Is there a better approach to altering an existing view to point to a new table?

    Thanks!

    Read the article

  • How does a java web project architecture look like without EJB3 ?

    - by Hendrik
    A friend and I are building a fairly complex website based on Java. (PHP would have been more obvious, but we chose Java because the educational aspect of this project is important to us.) We have already decided to use JSF (with RichFaces) for the front end and JPA for the backend, and so far we have decided not to use EJB3 for the business layer. The reason we've decided not to use EJB3 is because - and please correct me if I am wrong - if we use EJB3 we can only run it on a full-blown Java application server like JBoss, and if we don't use EJB3 we can still run it on a lightweight server like Tomcat. We want to keep the speed and cost of our future web server in mind. So far I've worked on two JEE projects, and both used the full stack (web, business logic, factories/persistence services, entities), with every layer a separate module.

    Now here is my question: if you don't use EJB3 in the business logic layer, what does the layer look like? Please tell me what common practice is when developing Java web projects without EJB3. Do you think the business logic layer can be thrown out altogether, with the business logic living in the backing beans? If you keep the layer, do you make all business methods static? Or do you instantiate each business class as needed in the backing beans in every session?
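
    For what it's worth, one common shape for a non-EJB business layer on a servlet container like Tomcat is a plain service class that owns its own transaction boundary around a JPA EntityManager. The sketch below is only an illustration of that idea, not a recommendation from the original post; the Customer entity, the persistence-unit name "webappPU", and the method names are made up.

        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Persistence;

        // Hypothetical entity; in a real project this lives in its own file.
        @Entity
        class Customer {
            @Id @GeneratedValue
            private Long id;
            private String name;

            protected Customer() {}                        // no-arg constructor required by JPA
            public Customer(String name) { this.name = name; }
        }

        // Hypothetical plain-POJO business service: no EJB container required,
        // so it runs on Tomcat.
        public class CustomerService {

            // In a real application the factory would be created once (for example
            // in a ServletContextListener) and shared; it is expensive to build.
            private final EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("webappPU");

            public Customer register(String name) {
                EntityManager em = emf.createEntityManager();
                try {
                    // RESOURCE_LOCAL transactions: the service demarcates them itself,
                    // which is the work EJB3 session beans would otherwise do for you.
                    em.getTransaction().begin();
                    Customer c = new Customer(name);
                    em.persist(c);
                    em.getTransaction().commit();
                    return c;
                } catch (RuntimeException e) {
                    if (em.getTransaction().isActive()) {
                        em.getTransaction().rollback();
                    }
                    throw e;
                } finally {
                    em.close();
                }
            }
        }

    A JSF backing bean would then simply obtain such a service and call register(), keeping persistence and transaction code out of the page beans.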

    Read the article

  • flex actionscript not uploading file to PHP page HELP!

    - by Rees
    Hello, please help! I am using ActionScript 3 with Flex SDK 3.5 and PHP to allow a user to upload a file - that is my goal. However, when I check my server directory for the file... NOTHING is there! For some reason SOMETHING is going wrong, even though the ActionScript alerts a successful upload (and I have even tried all the event listeners for upload errors and none are triggered). I have also tested the PHP script and it uploads SUCCESSFULLY when receiving a file from another PHP page (so I'm left to believe there is nothing wrong with my PHP). However, ActionScript is NOT giving me any errors when I upload - in fact it gives me a successful event... and I know my Flex application is actually trying to send the data, because when I attempt to upload a large file it takes significantly more time to alert a "successful" event than when I upload a small file. I feel I have debugged every aspect of this code and am now spent. Pleaseeee, anyone, can you tell me what's going wrong?? Or at least how I can find out what's happening? I'm using the Flash debugger and I'm still getting zero errors.

        private var fileRef:FileReference = new FileReference();
        private var flyerrequest:URLRequest = new URLRequest("http://mysite.com/sub/upload_file.php");

        private function uploadFile():void {
            fileRef.browse();
            fileRef.addEventListener(Event.SELECT, selectHandler);
            fileRef.addEventListener(Event.COMPLETE, completeHandler);
        }

        private function selectHandler(event:Event):void {
            fileRef.upload(flyerrequest);
        }

        private function completeHandler(event:Event):void {
            Alert.show("uploaded");
        }

    Read the article
