Search Results

Search found 69968 results on 2799 pages for 'real time updates'.


  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor for describing the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't seem to hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure that software devs have the same authority. Ideas?

    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care) about specific aspects of the system we're retiring (i.e., they think it was just fine):

    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data. These tables had new records added every day, even though they were duplicates of previous records. And there was no way to tie the reference data used for a particular case at the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES also have spaces, apostrophes and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would do a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.

    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly and the new system works fine, but without the old data they won't be happy. Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question - but without some of the details it can be difficult to really appreciate the awful beauty of this problem.
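
    One concrete way to make the "99% duplicate data" claim land with a customer is to measure it and report a single number. Below is a minimal sketch, assuming the reference tables can be exported to CSV; the file name and the "loaded_on" column are hypothetical stand-ins for the daily-copy timestamp:

        import csv
        from collections import Counter

        def duplication_ratio(path, ignore=("loaded_on",)):
            """Fraction of rows that are exact duplicates of an earlier row,
            ignoring the column that only records which day the copy was made."""
            counts = Counter()
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    key = tuple(v for k, v in sorted(row.items()) if k not in ignore)
                    counts[key] += 1
            total = sum(counts.values())
            return (total - len(counts)) / total if total else 0.0

        print("%.1f%% of reference rows say nothing new"
              % (100 * duplication_ratio("ref_data.csv")))

    A single percentage like this tends to land better with business folks than any house metaphor.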

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed key enterprise requirements such as "broad and deep solutions," "running your mission-critical applications," and "an automated and integrated set of capabilities." Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.

    Cloud Elasticity and Enterprise Application Requirements

    Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well known for "stateless" mid-tier servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless," and justifiably so. How, then, do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where cloud meets grid computing. At Oracle, we have invested an enormous amount of time, energy, and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs or removing unneeded ones as the "cloud scale-out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters and Automatic Storage Management, on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, it just talks to a single database and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
    For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g., Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

    Assemble, Deploy and Manage Enterprise Applications for the Cloud

    Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager. An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility into usage cost via "showback" or "chargeback". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform, and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and setup, service delivery, operations management, metering and chargeback, etc. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private, hybrid, infrastructure, platform, and application clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

    Summary

    But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways:

    - It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.
    - Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
    - Don't make the mistake of equating cloud with virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
    - Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it is empowered with the same kind of control and transparency that it is used to from public cloud services.
    - Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution together out of disparate components, and even more in maintaining it.

    Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • finding houses within a radius

    - by paul smith
    During an interview I was asked the following: given a real estate application that lists all houses currently on the market (i.e., for sale) within a given distance (say, for example, the user wants to find all houses within 20 miles), how would you design your application (both data structure and algorithm) to build this type of service? Any ideas? How would you implement it? I told him I didn't know because I've never done any geo-related stuff before.
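
    For reference, one common shape of answer is a two-step filter: bucket every listing into a coarse latitude/longitude grid, then compute exact great-circle distances only for candidates in nearby cells. The sketch below is illustrative only - the class, the 0.5-degree cell size, and the in-memory layout are all assumptions, not anything from the interview:

        import math
        from collections import defaultdict

        EARTH_RADIUS_MILES = 3958.8

        def haversine_miles(lat1, lon1, lat2, lon2):
            # Exact great-circle distance between two lat/lon points.
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = (math.sin(dp / 2) ** 2
                 + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
                 * math.sin(dl / 2) ** 2)
            return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

        class HouseIndex:
            # One degree of latitude is ~69 miles, so 0.5-degree cells plus a
            # one-cell neighborhood comfortably cover a 20-mile search radius.
            def __init__(self, cell_deg=0.5):
                self.cell = cell_deg
                self.grid = defaultdict(list)   # (cx, cy) -> [(lat, lon, house_id)]

            def _key(self, lat, lon):
                return (int(lat // self.cell), int(lon // self.cell))

            def add(self, lat, lon, house_id):
                self.grid[self._key(lat, lon)].append((lat, lon, house_id))

            def within(self, lat, lon, radius_miles):
                cx, cy = self._key(lat, lon)
                hits = []
                for dx in (-1, 0, 1):           # widen the ring for larger radii
                    for dy in (-1, 0, 1):
                        for hlat, hlon, hid in self.grid.get((cx + dx, cy + dy), []):
                            if haversine_miles(lat, lon, hlat, hlon) <= radius_miles:
                                hits.append(hid)
                return hits

    In a real system you would more likely lean on a geohash, an R-tree, or the database's built-in spatial index, but the prune-then-exact-check structure is the core idea interviewers usually fish for.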

    Read the article

  • My First 5K, the recap.

    - by Chris Williams
    It was a nice day to be outside (and trust me, those aren't words you'll hear from me often). I got to the site around 7:45, hit the pre-reg table and got my number along with a goody bag full of coupons for racing gear, a water bottle and a t-shirt. Oh, and a map. Stashed all that stuff in the jeep, emptied my pockets of everything but my iPhone and my jeep key, and proceeded to walk around for a bit as people started showing up and signing in. It was fairly breezy, and there was definitely a storm coming... but it was anyone's guess on when it would actually arrive. It was interesting to see everyone who was participating. If I had to guess, I would say the event was 60-70% women, with a pretty broad distribution of age... as young as 13 to well over 60 (in both genders). I don't know exactly how many folks were there, but it was well over 300. Eventually it was time to kick things off, and everyone made their way to the start line. All of the 5K and 10K runners were mixed together, starting at the same time. All the walkers and the people with strollers or dogs were in the back. It was pretty chaotic at first, once things started, but it thinned out fairly quickly. The 10K people and the hardcore runners sped ahead of everyone else and the walkers gradually lagged behind. The 5K course was pretty nice, winding around a lake down in Eden Prairie. The 10K course overlapped most of ours, but branched off a couple times too. I didn't run the whole time, but I started the race running and I ended it running, and did a mix of walking and running along the way. I met my goals, which were a) don't ever stop and b) don't be last. The weather managed to hold out for the entire race. It never got too hot, there was a nice breeze and it was mostly overcast. Pretty much perfect in my book. About 20-30 minutes after I left, the rain came down pretty hard. I had a good time, and will most likely do more of them. We'll see.

    Read the article

  • Plans for Certifying Oracle Database 12c with E-Business Suite

    - by Steven Chan (Oracle Development)
    Oracle Database 12c is now officially released. We're as excited about this new database release as you are. In fact, we've been testing a wide variety of E-Business Suite releases and configurations with internal DB 12c betas for some time. This testing is going well, but as usual, Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates. You're welcome to monitor or subscribe to this blog; I'll post updates here as soon as they're available.

    Read the article

  • Profiling Startup Of VS2012 – ANTS Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deeply I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2.4 GHz with 3 GB RAM and 32-bit Windows 7. ANTS Profiler is really easy to use, so let's try it out. ANTS Profiler wants to start the profiled application on its own, which seems to be rather common. I chose method-level timing of all managed methods. In the configuration menu I opted to collect all call stacks to get full details. Once this is configured you are ready to go.

    After that you can select the Method Grid to view Wall Clock Time in ms. I hate percentages, which are on by default, because I want to look at where absolute time is spent and not something else.

    From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time is spent. This does look nice. But did you see the size of the scroll bar in the Method Grid? Although I asked for all call stacks, I get only about 4 pages of methods to drill down into. From the scroll bar size I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented with only a fraction of the methods that were actually executed. I also tried the configuration option to profile the extremely trivial functions as well, but there was no noticeable difference. It seems that ANTS Profiler filters away far too many details to be useful for bigger systems. If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process. Next: JetBrains dotTrace

    Read the article

  • Strange Flash AS3 xml Socket behavior

    - by Rnd_d
    I have a problem which I can't understand. To investigate it I wrote a socket client in AS3 and a server in Python/Twisted; you can see the code of both applications below. Let's launch two clients at the same time, arrange them so that you can see both windows, and press the connect button in both windows. Then press and hold any key. What I'm expecting: the client with the pressed key sends the message "some data" to the server, then the server sends this message to all the clients (including the original sender). Then each client moves the 'connectButton' button to the right and prints a message to the log with the time in the following format: "min:secs:milliseconds". What is going wrong: the motion is smooth in the client that sends the message, but in all other clients the motion is jerky. This happens because messages to those clients arrive later than to the original sending client. And if we have three clients (let's name them A, B, C) and we send a message from A, the send-time log for B and C will be the same. Why do other clients receive these messages later than the original sender? By the way, on Ubuntu 10.04/Chrome all the motion is smooth, with the two clients launched in separate Chrome instances. windows screenshot I can't post the Linux screenshot; I need more than 10 reputation to post more hyperlinks.

    Listing of log, four clients simultaneously:

        [16:29:33.280858] 62.140.224.1 >> some data
        [16:29:33.280912] 87.249.9.98 << some data
        [16:29:33.280970] 87.249.9.98 << some data
        [16:29:33.281025] 87.249.9.98 << some data
        [16:29:33.281079] 62.140.224.1 << some data
        [16:29:33.323267] 62.140.224.1 >> some data
        [16:29:33.323326] 87.249.9.98 << some data
        [16:29:33.323386] 87.249.9.98 << some data
        [16:29:33.323440] 87.249.9.98 << some data
        [16:29:33.323493] 62.140.224.1 << some data
        [16:29:34.123435] 62.140.224.1 >> some data
        [16:29:34.123525] 87.249.9.98 << some data
        [16:29:34.123593] 87.249.9.98 << some data
        [16:29:34.123648] 87.249.9.98 << some data
        [16:29:34.123702] 62.140.224.1 << some data

    AS3 client code:

        package {
            import adobe.utils.CustomActions;
            import flash.display.Sprite;
            import flash.events.DataEvent;
            import flash.events.Event;
            import flash.events.IOErrorEvent;
            import flash.events.KeyboardEvent;
            import flash.events.MouseEvent;
            import flash.events.SecurityErrorEvent;
            import flash.net.XMLSocket;
            import flash.system.Security;
            import flash.text.TextField;

            public class Main extends Sprite {
                private var socket:XMLSocket;
                private var textField:TextField = new TextField();
                private var connectButton:TextField = new TextField();

                public function Main():void {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(event:Event = null):void {
                    socket = new XMLSocket();
                    socket.addEventListener(Event.CONNECT, connectHandler);
                    socket.addEventListener(DataEvent.DATA, dataHandler);
                    stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);
                    addChild(textField);
                    textField.y = 50;
                    textField.width = 780;
                    textField.height = 500;
                    textField.border = true;
                    connectButton.selectable = false;
                    connectButton.border = true;
                    connectButton.addEventListener(MouseEvent.MOUSE_DOWN, connectMouseDownHandler);
                    connectButton.width = 105;
                    connectButton.height = 20;
                    connectButton.text = "click here to connect";
                    addChild(connectButton);
                }

                private function connectHandler(event:Event):void {
                    textField.appendText("Connect\n");
                    textField.appendText("Press and hold any key\n");
                }

                private function dataHandler(event:DataEvent):void {
                    var now:Date = new Date();
                    textField.appendText(event.data + " time = " + now.getMinutes() + ":"
                        + now.getSeconds() + ":" + now.getMilliseconds() + "\n");
                    connectButton.x += 2;
                }

                private function keyDownHandler(event:KeyboardEvent):void {
                    socket.send("some data");
                }

                private function connectMouseDownHandler(event:MouseEvent):void {
                    var connectAddress:String = "ep1c.org";
                    var connectPort:Number = 13250;
                    Security.loadPolicyFile("xmlsocket://" + connectAddress + ":" + String(connectPort));
                    socket.connect(connectAddress, connectPort);
                }
            }
        }

    Python server code:

        from twisted.internet import reactor
        from twisted.internet.protocol import ServerFactory
        from twisted.protocols.basic import LineOnlyReceiver
        import datetime

        class EchoProtocol(LineOnlyReceiver):
            name = ""
            id = 0
            delimiter = chr(0)

            def getName(self):
                return self.transport.getPeer().host

            def connectionMade(self):
                self.id = self.factory.getNextId()
                print "New connection from %s - id:%s" % (self.getName(), self.id)
                self.factory.clientProtocols[self.id] = self

            def connectionLost(self, reason):
                print "Lost connection from " + self.getName()
                del self.factory.clientProtocols[self.id]
                self.factory.sendMessageToAllClients(self.getName() + " has disconnected.")

            def lineReceived(self, line):
                print "[%s] %s >> %s" % (datetime.datetime.now().time(), self, line)
                if line == "<policy-file-request/>":
                    data = """<?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <!-- Policy file for xmlsocket://ep1c.org -->
        <cross-domain-policy>
            <allow-access-from domain="*" to-ports="%s" />
        </cross-domain-policy>""" % PORT
                    self.send(data)
                else:
                    self.factory.sendMessageToAllClients(line)

            def send(self, line):
                print "[%s] %s << %s" % (datetime.datetime.now().time(), self, line)
                if line:
                    self.transport.write(str(line) + chr(0))
                else:
                    print "Nothing to send"

            def __str__(self):
                return self.getName()

        class ChatProtocolFactory(ServerFactory):
            protocol = EchoProtocol

            def __init__(self):
                self.clientProtocols = {}
                self.nextId = 0

            def getNextId(self):
                id = self.nextId
                self.nextId += 1
                return id

            def sendMessageToAllClients(self, msg):
                for client in self.clientProtocols:
                    self.clientProtocols[client].send(msg)

            def sendMessageToClient(self, id, msg):
                self.clientProtocols[id].send(msg)

        PORT = 13250
        print "Starting Server"
        factory = ChatProtocolFactory()
        reactor.listenTCP(PORT, factory)
        reactor.run()

    Read the article

  • Nagios core Event Handler not working

    - by sivashanmugam
    My Nagios event handler is not triggering when the service takes too long to respond or is down. My configuration is below.

    nagios.cfg:

        enable_event_handlers=1

    localhost.cfg:

        define service {
            use                    generic-service
            host_name              Server
            service_description    test-server
            servicegroups          test-service
            check_command          check-service
            is_volatile            0
            check_period           24x7
            max_check_attempts     4
            normal_check_interval  2
            retry_check_interval   2
            contact_groups         testcontacts
            notification_period    24x7
            notification_options   w,u,c,r
            notifications_enabled  1
            event_handler_enabled  1
            event_handler          recheck-service
        }

    command.cfg:

        define command{
            command_name  recheck-service
            command_line  /usr/local/nagios/libexec/alert.sh $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$
        }

    alert.sh file:

        #!/bin/sh
        set -x
        case "$1" in
        OK)
            # The service just came back up, so don't do anything...
            ;;
        WARNING)
            # We don't really care about warning states, since the service is probably still running...
            ;;
        UNKNOWN)
            # We don't know what might be causing an unknown error, so don't do anything...
            ;;
        CRITICAL)
            # Aha! The HTTP service appears to have a problem - perhaps we should restart the server...
            # Is this a "soft" or a "hard" state?
            case "$2" in
            # We're in a "soft" state, meaning that Nagios is in the middle of retrying the
            # check before it turns into a "hard" state and contacts get notified...
            SOFT)
                # What check attempt are we on? We don't want to restart the web server on
                # the first check, because it may just be a fluke!
                case "$3" in
                # Wait until the check has been tried 3 times before restarting the web server.
                # If the check fails on the 4th time (after we restart the web server), the
                # state type will turn to "hard" and contacts will be notified of the problem.
                # Hopefully this will restart the web server successfully, so the 4th check
                # will result in a "soft" recovery. If that happens no one gets notified
                # because we fixed the problem!
                3)
                    echo -n "Going To Ping the Virtual Machine (3rd soft critical state)..."
                    # Call the init script to restart the HTTPD server
                    myresult=`/usr/local/nagios/libexec/check_http xyz.com -t 100 | grep 'time' | awk '{print $10}'`
                    echo "Your service is taking the following time delay:" "$myresult seconds" | mail -s "WARNING: Service Taking More Time To Respond" [email protected]
                    ;;
                esac
                ;;
            # The HTTP service somehow managed to turn into a hard error without getting fixed.
            # It should have been restarted by the code above, but for some reason it didn't.
            # Let's give it one last try, shall we?
            # Note: Contacts have already been notified of a problem with the service at this

    Read the article

  • ubuntu lite volume control

    - by idio
    I've recently given up on mainline Ubuntu, especially since new updates ruined Quantal and Raring is just a mess, and went on to install a more friendly version: Ubuntu Lite. While I am quite happy with most of the functionality of this lite version of 12.04, one of the modifications, the volume control, is totally ridiculous; there is an applet that is completely counter-intuitive. So my question is: is there a way to install the classic Ubuntu volume control back so I can let go of this applet altogether? Thanks

    Read the article

  • Chrome OS is missing or damaged

    - by Ken
    My Google Cr-48 started flaking out/rebooting and finally showed this message. This post solved the problem quite easily: http://cr-48.wikispaces.com/Reseat+SSD+Cable. Two hints: 1) you need to pull off the rubber feet to get at some screws; 2) the real problem is the little white clip under the cable. Don't worry about reseating anything - just push the cable back on and snap the little white clip back up into place to hold the cable.

    Read the article

  • Best Advice Ever: Learn By Helping Others

    - by Argenis
    I remember when back in 2001 my friend and former SQL Server MVP Carlos Eduardo Rojas was busy earning his MVP street-cred in the NNTP forums, aka Newsgroups. I always thought he was playing the Sheriff trying to put some order in a Wild Wild West town by trying to understand what these people were asking. He spent a lot of time doing this stuff – and I thought it was just plain crazy. After all, he was doing it for free. What was he gaining from all of that work? It was not until the advent of Twitter and #SQLHelp that I realized the real gain behind helping others. Forget about the glory and the laurels of others thanking you (and thinking you're the best thing ever – ha!), or whatever award with whatever three-letter acronym might be given to you. It's about what you learn in the process of helping others. See, when you teach something, it's usually at a fixed date and time, and on a specific topic. But helping others with their issues or general questions is something that goes on 24x7, on whatever topic under the sun. Just go look at sites like DBA.StackExchange.com, or the SQLServerCentral forums. It's questions coming in literally non-stop from all corners of the world. And yet a lot of people are willing to help you, regardless of who you are, where you come from, or what time of day it is. And in my case, this process of helping others usually leads to me learning something new. Especially in those cases where the question isn't really something I'm good at. The delicate part comes when you're ready to give an answer, but you're not sure. Oftentimes I'll try to validate with Internet searches and what have you. Oftentimes I'll throw in a question mark at the end of the answer, so as not to look authoritative, but rather suggestive. But as time passes by, you get more and more comfortable with that topic. And that's the real gain. I have done this for many years now on #SQLHelp, which is my preferred vehicle for providing assistance. I cannot tell you how much I've learned from it. By helping others, by watching others help. It's all knowledge and experience you gain… and you might not be getting all that in your day job today. Such a thing, my dear reader, is invaluable. It's what will differentiate yours amongst a pack of resumes. It's what will get you places. Take it from me - a guy who, like you, knew nothing about SQL Server.

    Read the article

  • Multiple vulnerabilities in fetchmail

    - by Umang_D
    CVE             Description                               CVSSv2 Base Score   Component   Product and Resolution
    CVE-2011-3389   Improper Input Validation vulnerability   4.3                 fetchmail   Solaris 11 11/11 SRU 12.4
    CVE-2012-3482   Denial of Service vulnerability           5.0                 fetchmail

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Set-based Speed Phreakery: The FIFO Stock Inventory SQL Problem

    The SQL Speed Freak Challenge is a no-holds-barred competition to find the fastest way in SQL Server to perform a real-life database task. It is the programming equivalent of drag racing, but without the commentary box. Kathi has stepped in to explain what happened with the second challenge and why some SQL ran faster than others.

    Read the article

  • Small hiccup with VMware Player after upgrading to Ubuntu 12.04

    The upgrade process

    Finally, it was time to upgrade to a new LTS version of Ubuntu - 12.04, aka Precise Pangolin. I scheduled the weekend for this task, and despite the nickname of Mauritius (Cyber Island) it took roughly 6 hours to download nearly 2,400 packages. No problem in general, as I have spare machines to work on, and it was the weekend anyway. All went very smoothly and only a few packages required manual attention due to local modifications in the configuration. With the new kernel 3.2.0-24 it was necessary to reboot the system, and compared to the last upgrade, I got my graphical login as expected.

    Compilation of VMware Player 4.x fails

    A quick test of the installed applications - Firefox, Thunderbird, Chromium, Skype, CrossOver, etc. - reveals that everything is fine in general. Firing up VMware Player displays the familiar kernel module dialog that requires compiling the modules for the newly booted kernel. Usually this isn't a big issue, but this time I was confronted with the situation that vmnet didn't compile as expected ("Failed to compile module vmnet"). Luckily, this issue is already well known, even though usually with "Failed to compile module vmmon" as the general reason, and it was very easy and quick to find the solution to this problem. In the VMware Communities there are several forum threads related to this topic, and VMware provides the necessary patch file for Workstation 8.0.2 and Player 4.0.2. In case you are still on Workstation 7.x or Player 3.x there is another patch file available. After download, extract the file like so:

        tar -xzvf vmware802fixlinux320.tar.gz

    and run the patch script as super-user:

        sudo ./patch-modules_3.2.0.sh

    This will alter the existing installation and source files of VMware Player on your machine. As a last step, which isn't described in many other resources, you have to restart the vmware service, or for the faint-hearted, just reboot your system:

        sudo service vmware restart

    This will load the newly created kernel modules, and after that VMware Player will start as usual.

    Summary

    Upgrading any derivative of Ubuntu, in my case Xubuntu, is quick and easily done, but it might hold some surprises from time to time. Nonetheless, it is absolutely worth going for. Currently, this patch for VMware is the only obstacle I have had to face so far, and my system feels and looks better than before. Happy upgrade!

    Resources

    I used the following links based on Google search results:

    http://communities.vmware.com/message/1902218#1902218
    http://weltall.heliohost.org/wordpress/2012/01/26/vmware-workstation-8-0-2-player-4-0-2-fix-for-linux-kernel-3-2-and-3-3/

    Update on VMware Player 4.0.3

    Please continue reading my follow-up article in case you have upgraded to either VMware Workstation 8.0.3 or VMware Player 4.0.3.

    Read the article

  • Microsoft Advisory Services Engagement Scenario - BizTalk Server Performance Issues

    982896 ... Microsoft Advisory Services Engagement Scenario - BizTalk Server Performance Issues

    Read the article

  • New Podcast Available - Fusion DOO for Multi-Channel Retail

    - by Pam Petropoulos
    Oracle Fusion Distributed Order Orchestration can help retailers standardize their order and fulfillment processes across all channels.  Listen to the latest podcast entitled “Unify Sales and Fulfillment in Multi-Channel Retail with Fusion DOO” and discover how Fusion Distributed Order Orchestration can deliver value to retail customers and also hear real world examples of how customers are using it today.  Click here to listen to the podcast.

    Read the article

  • Why do old programming languages continue to be revised?

    - by SunAvatar
    This question is not, "Why do people still use old programming languages?" I understand that quite well. In fact the two programming languages I know best are C and Scheme, both of which date back to the 70s. Recently I was reading about the changes in C99 and C11 versus C89 (which seems to still be the most-used version of C in practice and the version I learned from K&R). Looking around, it seems like every programming language in heavy use gets a new specification at least once per decade or so. Even Fortran is still getting new revisions, despite the fact that most people using it are still using FORTRAN 77. Contrast this with the approach of, say, the typesetting system TeX. In 1989, with the release of TeX 3.0, Donald Knuth declared that TeX was feature-complete and future releases would contain only bug fixes. Even beyond this, he has stated that upon his death, "all remaining bugs will become features" and absolutely no further updates will be made. Others are free to fork TeX and have done so, but the resulting systems are renamed to indicate that they are different from the official TeX. This is not because Knuth thinks TeX is perfect, but because he understands the value of a stable, predictable system that will do the same thing in fifty years that it does now. Why do most programming language designers not follow the same principle? Of course, when a language is relatively new, it makes sense that it will go through a period of rapid change before settling down. And no one can really object to minor changes that don't do much more than codify existing pseudo-standards or correct unintended readings. But when a language still seems to need improvement after ten or twenty years, why not just fork it or start over, rather than try to change what is already in use? If some people really want to do object-oriented programming in Fortran, why not create "Objective Fortran" for that purpose, and leave Fortran itself alone? I suppose one could say that, regardless of future revisions, C89 is already a standard and nothing stops people from continuing to use it. This is sort of true, but connotations do have consequences. GCC will, in pedantic mode, warn about syntax that is either deprecated or has a subtly different meaning in C99, which means C89 programmers can't just totally ignore the new standard. So there must be some benefit in C99 that is sufficient to impose this overhead on everyone who uses the language. This is a real question, not an invitation to argue. Obviously I do have an opinion on this, but at the moment I'm just trying to understand why this isn't just how things are done already. I suppose the question is: What are the (real or perceived) advantages of updating a language standard, as opposed to creating a new language based on the old?

    Read the article

  • When checking for updates, I get a CrossOver error message and cannot open Update Manager

    - by Heather
    I am running Ubuntu 11.10 only on my computer. It had been off for a few months and I wanted to see if it needed any updates. When I try to click on Update Manager, I get the following error message:

        Could not initialize the package information
        An unresolvable problem occurred while initializing the package information.
        Please report this bug against the 'update-manager' package and include the following error message:
        'E:Opening /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-games_ubuntu.list - ifstream::ifstream (13: Permission denied)'

    Read the article

  • Idera Announces SQL Compliance Manager 3.6

    Perhaps the main highlight of SQL compliance manager 3.6's impressive set of features is its ability to actively track any activities of privileged users. When users of high administrative privileges access column groups in monitored tables, SQL compliance manager 3.6 issues alerts to security administrators, compliance officers, IT auditors, and the like in a proactive manner. Such functionality allows the product to provide an extra barrier against the possibility of insider threats to an organization's data. Idera developed SQL compliance manager to supply its clients with real-time audit...

    Read the article

  • Why is it better to use unreadable bytes for client-server communication?

    - by Alessa
    I'm designing a communication protocol for a client-server application, and here is what I'm thinking about:

        "authme username password" (maybe encrypted)
        "accept"
        "get archive of H2O from 03.02.2005 to 20.12.2064"
        transferring a binary structure, or "error description"

    Why do I always need to do something like

        0x0FA52FD + CRC
        0x0D34423 + CRC
        ...

    I can see some security reasons, but I don't think that's the real reason. So why can't I use strings in client-server communication?
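
    To make the trade-off concrete, here is a hedged sketch of the same request in both styles - the opcode, field layout, and sensor id are invented for illustration:

        import struct
        import zlib

        # Text form: human-readable, variable length, needs real parsing.
        text_msg = b"get archive of H2O from 03.02.2005 to 20.12.2064\n"

        # Binary form: fixed layout, cheap to parse, integrity-checked by a CRC.
        OP_GET_ARCHIVE = 0x0F                     # invented opcode
        payload = struct.pack("!BHII",            # network byte order
                              OP_GET_ARCHIVE,
                              42,                 # invented id standing in for "H2O"
                              1107388800,         # "from" as a unix timestamp
                              2995488000)         # "to" as a unix timestamp
        frame = payload + struct.pack("!I", zlib.crc32(payload) & 0xFFFFFFFF)

        def crc_ok(frame):
            body, (crc,) = frame[:-4], struct.unpack("!I", frame[-4:])
            return zlib.crc32(body) & 0xFFFFFFFF == crc

        print(len(text_msg), len(frame), crc_ok(frame))   # 49 15 True

    Strings absolutely do work (HTTP/1.1, IMAP and plenty of other protocols are text); the binary frame just buys compactness and a built-in corruption check at the price of readability while debugging.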

    Read the article

  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspx

    Recently I was digging deeper into why some WCF-hosted workflow application consumed quite a lot of memory although it basically only loaded a XAML workflow. The first tool of choice is Process Explorer, or even better Process Hacker (it has more options, and the best feature, copy & paste, does work). The three most important numbers of a process with regard to memory are Working Set, Private Working Set, and Private Bytes:

    - Working Set is the currently consumed physical memory (parts can be shared between processes, e.g. loaded dlls which are read-only).
    - Private Working Set is the physical memory needed by this process which is not shareable.
    - Private Bytes is the amount of non-shareable memory which is only visible to the current process (e.g. all new, malloc, and VirtualAlloc calls create private bytes).

    When you have a bigger workflow it can easily consume 500 MB under 64 bit for a 1-2 MB XAML file. This does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit, which looks a bit strange, but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit. I tried to repro the issue with a medium-sized XAML file (400 KB) which contains 1000 variables and 1000 ifs, which can be represented by C# code like this:

        string Var1;
        string Var2;
        ...
        string Var1000;

        if (!String.IsNullOrEmpty(Var1))
        {
            Console.WriteLine("Var1");
        }
        if (!String.IsNullOrEmpty(Var2))
        {
            Console.WriteLine("Var2");
        }
        ...

    Since WF is based on VB.NET expressions, you are bound to the hosted VB.NET compiler, which results (on x64) in 140 MB of private bytes, which is ca. 140 KB for each if clause - quite a lot if you think about the actually present functionality. But there is hope: .NET 4.5 now allows C# expressions for WF, which is a major step forward for all C# lovers. I created a simple patcher to "cross compile" my XAML to C# expressions. Let's look at the result:

    [screenshots: C# expressions vs. VB expressions, both x86]

    On my home machine I have only 32 bit, which gives you quite exactly half of the memory consumption under 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory, but instead it consumed a magnitude more memory. That is surprising, to say the least. The workflow initializes in about the same time under x64 and x86, where the VB code does it in 2s whereas the C# version needs 18s - also nearly ten times slower. That is too high a price to pay for converting any bigger-sized XAML workflow from VB.NET to C# expressions. If I reduce the number of expressions to 500, then it needs 400 MB, which is about half of the memory. It seems that the cost per if rises linearly with the total number of expressions in a XAML workflow.

        Expression Language    Cost per IF    Startup Time
        C# 1000 Ifs x64        1.5 MB         18s
        C# 500 Ifs x64         750 KB         9s
        VB 1000 Ifs x64        140 KB         2s
        VB 500 Ifs x64         70 KB          1s

    Now we can directly compare two MS implementations. It is clear that both compilers use the same underlying structure, but the highly inefficient C# expression compiler adds a much higher per-expression cost than the VB.NET compiler. I have filed a Connect bug here with a harsher wording about recent advances in memory consumption. The funniest thing is that one MS employee gave an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf.
    He was after startup time, and the call stacks with regard to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks. … "C# expressions will be coming soon to WF, and that will have different performance characteristics than VB" … What did he know in January 2011 that I did not know until today? ;-) He knew that C# expressions would come, but also that they would not automatically have a better footprint. It is about time to fix that. In its current state, C# expressions are not usable for bigger workflows. That also explains the headline for today: you can cheat startup time by prestarting workflows so that the demo looks nice and snappy, but it hurts scalability a lot since you need much more memory than necessary. I found the stacks by enabling virtual allocation tracking within xperf, which is still the best tool out there. But first you need to look at your process to check where the memory is hiding. For the C# expression compiler you do not need xperf: you can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening in Private Data (VirtualAlloc), you can find them with xperf. There is a nice video on Channel 9 explaining VirtualAlloc tracking in greater detail. If your data allocations are on the Heap, it means that the C/C++ runtime created a heap for you from which all malloc and new calls allocate. You can enable heap tracing with xperf with full call stack support as well, which is also shown on Channel 9. Or you can use WPRUI directly. To make "Heap Usage" work you need to set the tracing flags for your executable (before you start it). For example, for devenv.exe:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe
            TracingFlags (DWORD) = 1

    Do not forget to disable it after you have completed profiling the process, or it will impact the startup time quite a lot. With xperf you can attach directly to a running process and collect heap allocation information from a gone-wild process - very handy if you need to find out what a process was doing that has arrived in a funny state. "VirtualAlloc usage" works without explicitly enabling anything for a specific process and is always on machine-wide. I had issues on my Windows 7 machines with call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine, but I do not want to downgrade.
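
    As a closing aside, the three counters from the start of this post can also be sampled from code when you want numbers over time rather than a Process Explorer snapshot. Here is a minimal sketch using the third-party psutil package; the mapping of psutil fields onto the Process Explorer columns is my approximation:

        import psutil

        # rss ~ Working Set, uss ~ Private Working Set; on Windows psutil also
        # exposes a 'private' field that corresponds to Private Bytes.
        p = psutil.Process()                    # or psutil.Process(pid)
        mem = p.memory_full_info()
        print("working set:         %6.1f MB" % (mem.rss / 2**20))
        print("private working set: %6.1f MB" % (mem.uss / 2**20))
        print("private bytes:       %6.1f MB"
              % (getattr(mem, "private", mem.vms) / 2**20))

    Sampling this around workflow initialization is a cheap way to reproduce the per-if costs in the table above without attaching a profiler.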

    Read the article
