Search Results

Search found 23323 results on 933 pages for 'worst is better'.


  • Which version of Windows Server 2008?

    - by dragonmantank
    One of the projects I'm working on is looking like we're going to need to migrate from CentOS 5.4 over to something else (we need to run PostgreSQL 8.3+, and CentOS/RHEL only support 8.1), and one of the options will be Windows Server. Since 2008 R2 is out, that's what I'm looking at. I'll need to run Postgres and Tomcat and don't really require anything that Windows has like IIS (if I can run Server Core, even better!). The other kicker is that it will be virtualized through VMware ESXi 4.0 so that we have three separate boxes: development, quality, and production servers. From a licensing standpoint, though, am I good enough with just the Web Server edition? Am I right in assuming that will be three licenses? Or should I just jump up to Enterprise so that I get 4 VM licenses?

    Read the article

  • Finding X on Excel scatter plot/trend line

    - by Wilka
    If I have some data in a scatter plot in Excel, e.g.

        X   Y
        1   10
        2   20
        3   30
        4   40
        5   50

    and I want to find the Y value for X = 10, or X = 3.5, or whatever (obviously this is a simplified example), I've been doing the following:

      1. Add a trend-line to the scatter plot data
      2. Format the trend-line to one that fits the data (linear in this case)
      3. Display the equation for the trend-line on the chart
      4. Type the equation into an empty cell, replacing x with a cell reference, e.g. "=10*A1", then put my X value into the cell A1

    Is there a better way of doing this with Excel? It's quite a few steps, and fairly repetitive. Or maybe Excel is just a poor choice of application for doing this? (I'm using Excel 2007)
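    A quick sanity check on the sample data above: the slope is (50 - 10) / (5 - 1) = 10 with a zero intercept, so the fitted line is y = 10x, and x = 3.5 gives y = 35. As a possible shortcut the poster doesn't mention, Excel's TREND and FORECAST worksheet functions can fit and evaluate the line in a single cell, e.g. =TREND(B2:B6,A2:A6,A8) with the sample Y values in B2:B6, the X values in A2:A6, and the new X in A8; the cell references here are purely illustrative.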

    Read the article

  • Set Up Anti-Brick Protection to Safeguard and Supercharge Your Wii

    - by Jason Fitzpatrick
    We’ve shown you how to hack your Wii for homebrew software, emulators, and DVD playback; now it’s time to safeguard your Wii against bricking and fix some annoyances—like that stupid “Press A” health screen. The thing about console modding and jailbreaking—save for the rare company like Amazon that doesn’t seem to care—is that companies will play a game of cat and mouse to try to knock modded consoles out of commission, undo your awesome mods, or even brick your device. Although extreme moves like bricktacular updates are rare, once you modify your device you have to be vigilant in protecting it against updates that could hurt your sweet setup. Today we’re going to walk you through hardening your Wii and giving it the best brick protection available.

    Read the article

  • Apache (XAMPP 1.8.0) access.log/Intrusion Detection Concern

    - by Andy Holaday
    [I originally posted on SO but it earned me a Tumbleweed badge. This looks like a better venue for the question.] I have Apache (XAMPP 1.8.0) running on Vista Pro x64. A couple of times now I have seen a pattern like the example below in access.log. What concerns me is that the "attack" seems to somehow shift from a public IP to a valid private IP on my network (which happens to be the WAN address of one of my routers). Two questions: How is this possible, and what happens if the "attacker" stumbles on a valid request? I've googled this to no avail.

        177.0.X.X - - [03/Jun/2012:08:19:34 -0400] "GET /phpMyAdmin-2.5.4/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:34 -0400] "GET /phpMyAdmin-2.5.5-rc1/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:34 -0400] "GET /phpMyAdmin-2.2.6/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:34 -0400] "GET /phpMyAdmin-2.5.5-rc2/index.php HTTP/1.1" 403
        192.168.15.3 - - [03/Jun/2012:08:19:56 -0400] "GET /phpMyAdmin-2.5.6-rc2/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:56 -0400] "GET /phpMyAdmin-2.5.6-rc1/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:56 -0400] "GET /phpMyAdmin-2.5.5-pl1/index.php HTTP/1.1" 403
        192.168.15.3 - - [03/Jun/2012:08:19:59 -0400] "GET /phpMyAdmin-2.5.7/index.php HTTP/1.1" 403
        192.168.15.3 - - [03/Jun/2012:08:20:01 -0400] "GET /phpMyAdmin-2.5.7-pl1/index.php HTTP/1.1" 403
        192.168.15.3 - - [03/Jun/2012:08:20:02 -0400] "GET HTTP/1.1" 400 1060 "-" "-"

    Read the article

  • Is there a security risk in allowing people to set their DNS so their own subdomains can be routed to my server?

    - by DantheMan
    Let's say that I have a web application, built in Django and deployed with Nginx. Is it a good idea to offer a service that allows customers to request that a subdomain be pointed at it? I figured this: if I don't allow this, then some companies won't want to access the service from http://mydjangoappmadeupname.com/bigcorporation/ They would rather access it through http://service.bigcorporation.com That would effectively mask that they are using an outside resource. Is there a significant risk that I am overlooking? Also, do you think it would be easier to just set things up in Django to handle it, allowing Nginx to accept all domains and then pushing them to Django, which would filter out whether they are allowed or not, or would it be better to just update my Nginx config each time a client wanted this changed?

    Read the article

  • Is there a pattern or best practice for passing a reference type to multiple classes vs a static class?

    - by Dave
    My .NET application creates HTML files, and as such, the structure looks like:

        variable myData
        BuildHomePage()
        variable graph = new BuildGraphPage(myData)
        variable table = BuildTablePage(myData)

    BuildGraphPage and BuildTablePage both require access to the data, the myData object. In the above example, I've passed the myData object to 2 constructors. This is what I'm doing now, in my current project. The myData object and its properties are all read-only. The problem is, the number of pages which will require this object has grown. In the real project there are currently 4, but the new spec is to have about 20. Passing this object to the constructor of each new object and assigning it to a field is a little time consuming, but not a hardship! This poses the question whether it's better practice to continue as I have, or to refactor and create a new static class for myData which can be referenced from anywhere in my project. I guess my abilities to use Google are poor, because I did try to find an appropriate pattern, as I am sure this type of design must be commonplace, but my results returned nothing. Is there a pattern which is suited, or do best practices lean towards one implementation over another?
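    Below is a minimal C# sketch of the two options being weighed: passing the read-only data object into each page builder's constructor versus exposing it through a static holder. The class and member names (ReportData, GraphPageBuilder, ReportContext) are invented for illustration, not taken from the poster's project.

        using System;

        public sealed class ReportData                    // stands in for "myData"; read-only after construction
        {
            public string Title { get; private set; }
            public ReportData(string title) { Title = title; }
        }

        public class GraphPageBuilder                     // option 1: the dependency is passed through the constructor
        {
            private readonly ReportData _data;
            public GraphPageBuilder(ReportData data) { _data = data; }
            public string Build() { return "<h1>Graph for " + _data.Title + "</h1>"; }
        }

        public static class ReportContext                 // option 2: a static holder referenced from anywhere
        {
            public static ReportData Current { get; set; }
        }

        public static class Demo
        {
            public static void Main()
            {
                var data = new ReportData("Q3 report");

                Console.WriteLine(new GraphPageBuilder(data).Build());   // explicit dependency

                ReportContext.Current = data;                            // ambient dependency
                Console.WriteLine(new GraphPageBuilder(ReportContext.Current).Build());
            }
        }

    The usual trade-off: constructor injection keeps the dependency explicit and easy to test, while the static holder saves the repeated constructor parameter but hides the coupling.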

    Read the article

  • File Saving Sometimes Fails

    - by YellPika
    When I attempt to save files, it sometimes (randomly) fails. In Blender, I sometimes get "Version Backup Failed: File Saved With @". In Visual Studio, building sometimes fails with an error message indicating that the target file/exe cannot be overwritten. If I wait a bit, I can save fine. It's almost as if programs are taking an abnormal amount of time to 'let go' of the files. What could be causing this behaviour? Edit: This seems to be caused by Windows Live Mesh monitoring my files and locking them whenever it uploads the new versions (BAD, considering the number of times I save my files, even redundantly). Any suggestions to work around this behaviour? Should I switch to a better service to sync my files?

    Read the article

  • Can I make Terminal.app open different profiles with shortcut like iTerm?

    - by Patrick
    I've been using iTerm for a while, but I switched back to Terminal.app in Snow Leopard due to the Visor plugin. Most things work pretty well since I switched to Terminal.app, except that in iTerm I could create different profiles and assign each profile a keyboard shortcut. The advantage of this setup is that I can have non-transparent and transparent profiles. If I need to look up a command in the background, I can start a transparent shell, and vice versa. I was thinking of using the Mac built-in "Application shortcut" in System Preferences. However, I realize that in Terminal.app the "New Window" and "New Tab" menus have the same profile names, so the Application shortcut gets confused and I can't assign a shortcut to it... Any thoughts on how to change a menu name in Terminal (or, even better, any application)? Thanks for your help.

    Read the article

  • NCurses, scrolling of multiline items, "current item" pointer and "selected items"

    - by mjf
    I am looking for hints/ideas about the best (most effective) way to scroll multi-line items, as well as emphasizing the "current item" and "selected items", such as:

        1 FOO ITEM
          1 Foo sub-item
          2 Foo sub-item
          3 Foo sub-item
        2 BAR ITEM
          1 Bar sub-item
        3 BAZ ITEM
          1 Baz sub-item
          2 Baz sub-item
        4 RAB ITEM
        5 ZZZ ITEM
          1 Zzz sub-item
          2 Zzz sub-item
          3 Zzz sub-item
          4 Zzz sub-item

    using NCurses (some combination of windows, sub-windows, pads, copywin? Uff! In fact, the lines could exceed stdscr's width, so the possibility to scroll left/right would also be nice - pads?)... The whole items (including the sub-items) are supposed to be emphasized as full-width window/pad areas. The "current item" (including its set of lines) should be emphasized (i.e. using A_BOLD); the selected items of choice (including the set of lines for each selected item) should be emphasized in another way (i.e. using A_REVERSE). What would you choose to cope with this in the most effective NCurses way? (The fewer redraws/refreshes the better, and the terminal is supposed to have the ability to change its size - such as XTerm running under "floating window" management.) Thank you for your ideas (or perhaps some piece of code where something similar is already solved - I was not able to find anything helpful on the Internet. I mean I am not going to copy/paste foreign code, but programming NCurses properly is still somewhat difficult for me). P.S.: Would you suggest "smooth-scrolling" +1/-1 screen line or rather "jump-scrolling" +lines/-lines of the items? (I personally prefer the latter one.) Sincerely, -- mjf

    Read the article

  • Script to kill process at logoff doesn't execute until process is dead?

    - by robertc
    We have a program that, due to memory leaks in some of the screens, doesn't exit cleanly when the user quits. The problem is that this blocks the normal logoff procedure - you select logout and a few processes disappear, but the user doesn't actually log off. Since I'm unable to fix the program, I thought I'd use a script run at logoff to kill the process. I've verified that the script kills the process if I run it by double-clicking, and I have added the script to Windows Settings - Scripts - Logoff on my machine in gpedit. Unfortunately it seems that the logoff scripts don't get run until all the processes have died, so it never runs. Is there a way to make the logoff scripts run at an earlier point in the process? Or is there a better approach to the issue?

    Read the article

  • Scriptable FTPS client able to send Keep Alive to control port?

    - by schultkl
    We need an FTP client that satisfies the following constraints:

      1. Windows
      2. Command-line scriptable, so we can automate it... sorry, FileZilla (?)
      3. FTPS, as it seems to perform better than SFTP
      4. The ability to send KeepAlive commands to the FTPS control port
      5. No passwords sent on the command line... sorry, curl

    Number 4, above, is critical: we have set KeepAlive in some other clients (e.g., CoreFTP LE) but we seem to have some routing equipment in the server environment which drops our connection when transferring a 7GB+ file. We have also set passive mode, and "resume transfer" functionality seems currently broken with this secure file transport server... so we need to download the file in one go. What FTPS clients might meet our needs?

    Read the article

  • Start Debugging in Visual Studio

    - by Daniel Moth
    Every developer is familiar with hitting F5 and debugging their application, which starts their app with the Visual Studio debugger attached from the start (instead of attaching later). This is one way to achieve step 1 of the Live Debugging process. Hitting F5, F11, Ctrl+F10 and the other ways to start the process under the debugger are covered in this MSDN "How To". The way you configure the debugging experience, before you hit F5, is by selecting the "Project" and then the "Properties" menu (Alt+F7 on my keyboard bindings). Depending on your project type there are different options, but if you browse to the Debug (or Debugging) node in the properties page you'll have a way to select local or remote machine debugging, what debug engines to use, command line arguments to use during debugging, etc. Currently the .NET and C++ project systems are different, but one would hope that one day they would be unified to use the same mechanism and UI (I don't work on that product team so I have no knowledge of whether that is a goal or if it will ever happen). Personally I like the C++ one better; it is described on this MSDN page. If you were following along in the "Attach to Process" blog post, the equivalent to the "Select Code Type" dialog is the "Debugger Type" dropdown: that is how you change the debug engine. Some of the debugger properties options appear on the standard toolbar in VS. With Visual Studio 11, the Debug Type option has been added to the toolbar. If you don't see that in your installation, customize the toolbar to show it - VS 11 tends to be conservative in what you see by default, especially for the non-C++ Visual Studio profiles. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Restrict access to ESXi

    - by kdl
    What is the best option to restrict access to an ESXi machine so that it can be managed using the vSphere client from only certain hosts? I know there is no built-in firewall and everyone recommends placing an ESXi machine behind a firewall, but when this is not an option... Is there any other option like using hosts.allow/deny or anything else? Or would I be better off using ESX instead of ESXi? Edit: In the given circumstances, I am not able to add any additional hardware or use things like managed switches.

    Read the article

  • Formatting minified jQuery, JavaScript using the Internet Explorer 9 Developer Toolbar

    - by Harish Ranganathan
    Much has been said about the F12 developer toolbar in IE and the support it provides for web developers. Starting with IE8, the Developer Toolbar is a menu item that helps you view the page source, scripts, profiling, and many other details of the rendered page. It even allows script debugging from within, and that makes it a truly powerful web developer toolbar. With IE9, the developer toolbar got even better with the Networking tab, which allows you to inspect the traffic/time taken and drill down into the Request/Response headers and other specifics. The Script tab allows you to view the scripts used in the page. One of the challenges of working with JavaScript / jQuery when they are minified is that they become really hard to read. Minified JavaScript is a compression technique and also a best practice for delivering faster web pages. However, when you would like to debug, minified JavaScript files become very hard to work with since they aren't properly formatted. Take the case of the above sample, which is a basic MVC 3 web application. It uses the minified jQuery and Modernizr files. Once we select those scripts, the script source appears minified and hard to follow. But once you click the "Format JavaScript" option under the Configuration icon, you can see the formatted JavaScript. This makes the script readable and also easy for debugging. Cheers!!!

    Read the article

  • Is It Time To Specialize?

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/18/is-it-time-to-specialize.aspx Over my career I have made a living as a generalist.  I have been a jack of all trades and a master of none.  It has served me well in that I am able to move from one technology to the other quickly and make myself productive.  Where it becomes a problem is deep knowledge.  I am constantly digging for the things that aren’t basic knowledge.  How do you make a product like WCF or Windows RT do more than just “Hello World”? As an architect I need to be a jack of all trades.  This is what helps me bring the big picture of a project into focus for developers with different skills to accomplish the goals of the project. It is key when the mix of technologies crosses Windows, Unix and mainframe with different languages and databases.  The larger the company the project is for, the more likely this scenario will arise. As a consultant and a developer I need to have specialized skills in order to get the job done efficiently.  If I have a SharePoint or Windows Phone project, knowing the object model details and possible roadblocks of the technology allows me to stay within budget as well as better advise the client on technology decisions. What is the solution?  Constant learning and associating with developers who specialize in a variety of technologies is the best thing you can do.  You may have thought you were done with classes when you left college, but in this industry you need to constantly be learning new products and languages.  The ultimate answer is you must generally specialize.  Learn as many subject areas as possible, but go deep whenever you can.  Sleep is overrated.  Good luck. del.icio.us Tags: software development, software architecture, specialization, generalist

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Conference Scholarships!

    - by Most Valuable Yak (Rob Volk)
    We Want YOU…To Learn! AtlantaMDF and Idera are teaming up to find a few good people. If you are:

      - A student looking to work in the database or business intelligence fields
      - A database professional who is between jobs or wants a better one
      - A developer looking to step up to something new
      - On a limited budget and can’t afford professional SQL Server training
      - Able to attend training from 9 to 5 on May 17, 2013

    AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per Pre-Con! One Recipient Each will Attend:

      - Denny Cherry: SQL Server Security - http://sqlsecurity.eventbrite.com/
      - Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance - http://surfmulticore.eventbrite.com/
      - Stacia Misner: Languages of BI - http://languagesofbi.eventbrite.com/
      - Bill Pearson: Practical Self-Service BI with PowerPivot for Excel - http://selfservicebi.eventbrite.com/
      - Eddie Wuerch: The DBA Skills Upgrade Toolkit - http://dbatoolkit.eventbrite.com/

    If you are interested in attending these pre-cons, send an email by April 30, 2013 to [email protected] and tell us:

      - Why you are a good candidate to receive this scholarship
      - Which sessions you’d like to attend, and why (list multiple sessions in order of preference)
      - What the session will teach you and how it will help you achieve your goals

    The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email and announcements made on May 6, 2013. GOOD LUCK! P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons! You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx. View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx. * Registration charges a $10 fee to cover lunch expenses.

    Read the article

  • Is this kind of design - a class for Operations On Object - correct?

    - by Mithir
    In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code would only work for the scenario it was first designed for; now there are more scenarios (like requests from an API or from other devices), so I had to redesign. I found myself moving all the DB code to objects which act like business-to-DB objects, and I've put all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which will hold all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which will take the InformationObject as a parameter and act on it. The OperatorObject should activate different objects, validate or check for existence or any scenario in which the business logic is compromised, and then perform the operation according to the information on the InformationObject. So my question is: is this kind of implementation correct? P.S. This Operator only works on a single business-wise operation.
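    A rough C# sketch of the shape described above, using an invented transfer-operation domain rather than the poster's actual classes: a plain InformationObject that only carries the inputs, and an Operator that validates it and then drives the business-to-DB objects.

        using System;
        using System.Collections.Generic;

        // The "InformationObject": only the data the operation needs, no behaviour.
        public class TransferInformation
        {
            public int FromAccountId { get; set; }
            public int ToAccountId { get; set; }
            public decimal Amount { get; set; }
        }

        // The "OperatorObject": validates the information, then performs the operation.
        public class TransferOperator
        {
            public IList<string> Errors { get; private set; }

            public TransferOperator()
            {
                Errors = new List<string>();
            }

            public bool Execute(TransferInformation info)
            {
                if (info.Amount <= 0) Errors.Add("Amount must be positive.");
                if (info.FromAccountId == info.ToAccountId) Errors.Add("Accounts must differ.");
                if (Errors.Count > 0) return false;

                // Here the operator would call the business-to-DB objects to persist the change.
                return true;
            }
        }

    Because the InformationObject can be filled from a web page, an API call, or another device before being handed to the same Operator, the validation and business rules stay in one place, which is the separation the redesign is after.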

    Read the article

  • What's so difficult about SVN merges? [closed]

    - by Mason Wheeler
    Possible Duplicate: I’m a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS? Every once in a while, you hear someone saying that distributed version control (Git, Hg) is inherently better than centralized version control (like SVN) because merging is difficult and painful in SVN. The thing is, I've never had any trouble with merging in SVN, and since you only ever hear that claim being made by DVCS advocates, and not by actual SVN users, it tends to remind me of those obnoxious commercials on TV where they try to sell you something you don't need by having bumbling actors pretend that the thing you already have, which works just fine, is incredibly difficult to use. And the use case that's invariably brought up is re-merging a branch, which again reminds me of those strawman product advertisements; if you know what you're doing, you shouldn't (and shouldn't ever have to) re-merge a branch in the first place. (Of course it's difficult to do when you're doing something fundamentally wrong and silly!) So, discounting the ridiculous strawman use case, what is there in SVN merging that is inherently more difficult than merging in a DVCS system?

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion.  That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you’re used to doing performance optimization in most other languages, especially C, C++, and similar.  However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be… The rules in C# when optimizing code are very different than in C or C++.  Often, they’re exactly backwards.  For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations often can have huge advantages.  If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler.  Looking at the memory allocation graph is usually the key for spotting this routine, as it’s often “hidden” deep in the call graph.  For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i=0; i<numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data.  As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;
        // ...
        return bar;

    Normally, if “DataStructure” was a simple data type, I could just allocate it on the stack.  However, its constructor, internally, allocated its own memory using new, so this wouldn’t eliminate the problem.  In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same “data” memory instead of allocating.  At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i=0; i<numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn’t something I wanted to do unless there was a significant benefit.
    In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes runtime down to 21 minutes.  This was such a significant improvement, I felt it was worth the reduction in maintainability. In C and C++, it’s generally a good idea (for performance) to:

      - Reduce the number of memory allocations as much as possible,
      - Use fewer, larger memory allocations instead of many smaller ones, and
      - Allocate as high up the call stack as possible, and reuse memory

    I’ve seen many people try to make similar optimizations in C# code.  For good or bad, this is typically not a good idea.  The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea.  In this scenario, for example, I may have been much better off leaving the original code alone.  The reason for this is the garbage collector.  The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages.  First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code.  This is something that should be avoided unless there is a significant reason.  Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast.  Finally, and most critically, there is a large advantage to having short-lived objects.  If you lift a variable out of the loop and reuse the memory, it’s much more likely that object will get promoted to Gen1 (or worse, Gen2).  This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later. As such, I’ve found that it’s often (though not always) faster to leave memory allocations where you’d naturally place them – deep inside of the call graph, inside of the loops.  This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole. In C#, I tend to:

      - Keep variable declarations in the tightest scope possible
      - Declare and allocate objects at usage

    While this serves some of the same goals (reducing unnecessary allocations, etc.), the goal here is a bit different – it’s about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1.  It also has the huge advantage of keeping the code very maintainable – objects are used and “released” as soon as possible, which keeps the code very clean.  It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time. Now – nowhere here am I suggesting that these rules are hard, fast rules that are always true.  That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary.  In my current project, however, I ran across one of those nasty little pitfalls that’s something to keep in mind – interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects.  In this case, these COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.
    Even though I was writing nice, clean managed code, the normal managed code rules for performance no longer apply.  After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration.  This required going back to a “native programming” mindset for optimization.  Lifting these variables and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement. Overall, the lessons here are:

      - Always profile if you suspect a performance problem – don’t assume any rule is correct, or any code is efficient just because it looks like it should be
      - Remember to check memory allocations when profiling, not just CPU cycles
      - Interop scenarios often cause managed code to act very differently than “normal” managed code
      - Native code can be hidden very cleverly inside of managed wrappers
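    A small, self-contained C# illustration (mine, not taken from the post) of the two allocation styles being contrasted: allocating at the point of use so objects stay short-lived and Gen0-friendly, versus lifting the allocation out of the loop and reusing it, which is what paid off in the interop case.

        using System;

        class AllocationStylesDemo
        {
            sealed class Buffer { public double[] Data = new double[256]; }

            static double Process(int item, Buffer b)
            {
                b.Data[0] = item;          // stand-in for real work against the buffer
                return b.Data[0];
            }

            static void Main()
            {
                // Style favoured for pure managed code: allocate at the point of use,
                // keep the object short-lived so it is likely collected in Gen0.
                for (int i = 0; i < 1000; i++)
                {
                    var b = new Buffer();
                    Process(i, b);
                }

                // Style that paid off in the interop scenario: lift the allocation
                // out of the loop and reuse it, trading tight scope for fewer allocations.
                var shared = new Buffer();
                for (int i = 0; i < 1000; i++)
                {
                    Process(i, shared);
                }

                Console.WriteLine("done");
            }
        }

    As the post stresses, which style wins is not something to assume; profile both, and watch allocations as well as CPU time.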

    Read the article

  • How do I get google to see keywords on a one page web application site?

    - by David
    I'm going to have to link to the web site to explain this, http://www.diagram.ly - it's a free service, so I hope this doesn't break advertising rules. Basically, it's a one-page web application; I don't want to create a web site for it. Some background text loads and, if JavaScript is enabled, the web application itself then loads. The problem is that Google only seems to be picking up the title of the page and the text in the footer, so the site only appears in Google searches for very limited text (based on the title and meta description mostly). I was hoping that search engines would pick up on the background text and index that. The text is factual, not keyword-stuffed. Yahoo seems to pick up the text, just not Google. Does anyone have any experience of how Google would view such a site and where I could put the text for a better result? Edit: I should mention that Google Webmaster Tools lists the site keywords as "Component, diagramly, feed, mxgraph, share and twitter" - basically the footer and little else.

    Read the article

  • Oracle eAM Webcast Series Announced (May-Dec 2010)

    - by [email protected]
    A series of free webinars with ReliabilityWeb will present key product capabilities of Oracle eAM and how they support maintenance and reliability best practices. Through this web-seminar series, companies can understand how to achieve better ROI. ReliabilityWeb will be using this as a key component of their initiative to build a stronger Oracle community.  For Oracle, this program demonstrates leadership and commitment to the Maintenance Systems Marketplace. Topics (note: all times are East):

      1. How can Oracle eAM enhance and support your reliability program? (May 13, 2010, 1-2 PM)
      2. Upgrading to Oracle eAM R12 - What's the value, when's the right time, what's involved and how do you get there? (June 17, 2010, 1-2 PM)
      3. Improving maintenance and reliability by aligning people, processes and systems. (July 15, 2010, 1-2 PM)
      4. Using Oracle eAM to drive your Condition Based Maintenance program. (July 29, 2010, 1-2 PM)
      5. Why and how do you get the power of Oracle eAM out to the people that are really doing maintenance - the technicians. (August 12, 2010, 1-2 PM)
      6. Standardizing and streamlining your maintenance work with Oracle eAM. (September 16, 2010, 1-2 PM)
      7. Standardizing maintenance and reliability data - How do you get there? (October 21, 2010, 1-2 PM)
      8. Using Oracle eAM to establish a Failure Reporting and Corrective Action System (FRACAS). (November 18, 2010, 1-2 PM)
      9. Maintenance Work Scheduling in Oracle eAM - Capabilities and Limitations. (December 16, 2010, 1-2 PM)

    For additional information contact Jay West, EAM Master, +1.205.515.4326.

    Read the article

  • Highly scalable and dynamic "rule-based" applications?

    - by Prof Plum
    For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule being stored in a DB. This allows easy changes to be made without diving into nasty details. Now, since C# cannot Eval("foo(bar);"), this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable thing for anyone else to pick up once it becomes legacy. Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently it becomes a real bear, but this cannot be such an uncommon problem that no one has thought of a better way to do it. Any suggestions? Is this current method defensible? What are the alternatives? Edit: Just to clarify, this is a large enterprise app, so no matter which solution works, there will be plenty of people constantly maintaining its rules and data (around 10). Also, the data changes frequently enough to say that some sort of centralized server system is basically a must.
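    One possible C# shape for this, sketched with invented rule and column names: instead of storing a string that gets evaluated in JavaScript, each DB row stores a rule name plus its parameters, and the name is looked up in a registry of ordinary compiled delegates.

        using System;
        using System.Collections.Generic;

        class RuleRegistrySketch
        {
            // Registry of named rules; the DB row would supply the name and the numbers.
            static readonly Dictionary<string, Func<decimal, decimal, bool>> Rules =
                new Dictionary<string, Func<decimal, decimal, bool>>
                {
                    { "AmountUnderLimit", (amount, limit) => amount < limit },
                    { "AmountOverLimit",  (amount, limit) => amount > limit }
                };

            static void Main()
            {
                // Pretend these three values came out of the rules table.
                string ruleName = "AmountUnderLimit";
                decimal amount = 120m, limit = 500m;

                Console.WriteLine(Rules[ruleName](amount, limit));   // True
            }
        }

    Rules can still be switched or re-parameterised in the database without redeploying, and the logic stays in testable C#; the trade-off is that a genuinely new kind of rule needs a code change, which may or may not be acceptable here.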

    Read the article

  • TeamViewer cannot connect

    - by Cetin Sert
    Last week I decided to use TeamViewer VPN to administer software on a server behind a firewall, using Remote Desktop. It was easy to configure it to start up with the system and make the VPN available on the other side, but now it fails to connect at the step shown below. The remote machine is running Windows Server 2008 R2. Is there a native way to circumvent the external firewall, using a server role or feature, to make Windows Server do the VPN work? Do people have better / more reliable experiences with other products such as Hamachi? The requirements are as follows:

      - Start at remote system start-up time
      - Make VPN connections to the remote machine possible

    Read the article

  • Only One Month to OpenWorld-San Francisco!

    - by Stephen Slade
    From around the world, the city is expecting 50,000+ guests to flock to this annual extravaganza.  Over 2,000 sessions will focus on Oracle’s latest product offerings, customer case studies, panels of experts and a variety of other hardware, technology, middleware and applications topics. For those interested in the latest capabilities delivered by Oracle’s supply chain applications, the ‘Focus-On’ documents are now available to help guide you in your schedule builder. Schedule builder allows you to create a personalized agenda for the sessions you wish to attend, such as:

    Monday, October 1, 2012
      - 3:15 pm - 4:15 pm: General Session: Supply Chain Management—Strategy, Update, and Roadmap. Richard Jewell, Senior Vice President, Applications Development, Oracle. Moscone West, Level 2, Room 3014

    Tuesday, October 2, 2012
      - 10:15 am - 11:15 am: Oracle Fusion Supply Chain Management: Overview, Strategy, Customer Experiences, and Roadmap. Jon Chorley, CSO & VP, Product Strategy, Oracle. Moscone West, Level 2, Room 2006

    There is an exciting lineup of about 100 supply chain sessions at OpenWorld. Contact your sales rep or Oracle Partner to obtain a copy of the most current Focus-On document, segmented by pillars such as Manufacturing, Maintenance/EAM, Value Chain Planning, Value Chain Execution, Procurement and Agile/Product Lifecycle Management.  They will provide you with a better-informed view to schedule your time in San Francisco.

    Read the article

  • links for 2010-06-04

    - by Bob Rhubart
    @biemond: JEJB Transport and manipulating the Java Response in OSB 11g — "JEJB Transport works like the EJB Transport," says Oracle ACE Edwin Biemond, "but the request and response objects are not translated to XML so you can't use XQuery etc. To make things not too hard, OSB 11g makes an XML presentation of the request method and its parameters, which you can use in the Proxy Service." (tags: oracleace soa oracle jejb java)

    @bex: Oracle UCM jQuery Plugin — "This connector allows you to use jQuery to make UCM Service calls through AJAX, and easily display the results," says Oracle ACE Director Bex Huff. "This is 100% pure JavaScript; no Java, Idoc, or ADF required!" (tags: oracleace ucm oracle otn enterprise2.0)

    Oracle Solaris Studio Express 6/10 and its Customer Feedback Program are now available (Oracle Developer Tools Blog) — "Oracle Solaris Studio Express 6/10 is available on Solaris 10 (SPARC, x86), OEL 5 (x86), RHEL 5 (x86), SuSE 11 (x86) today and will be available for OpenSolaris in the near future," says Pieter Humphrey. (tags: oracle otn solaris sparc liunux)

    @soatoday: EA and SOA Should Report to COO — "So, who gets EA -- the CIO or VP of a Business? I argue neither! After all, a typical EA goal is to connect the Business and IT together to impart better structure and visibility across the enterprise. I firmly believe that neither should own EA so that neither imparts too much of their organization (i.e. bias) on the EA process and deliverables. EA needs to be independent, and it's for all the right reasons." -- Oracle ACE Director Jordan Braunstein (tags: oracleace entarch soa)

    Read the article
