Search Results

Search found 126 results on 6 pages for 'responsiveness'.

Page 4 of 6

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GUI data being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window even at a resolution of 800x600 pixels. The connection speeds are 10 Mbit up at home and about 1 Mbit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a Qemu SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without (reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo with the software in question being: workstation: X.Org X Server 1.10.4, OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e, QEMU emulator version 0.15.1 (qemu-kvm-0.15.1); laptop: X.Org X Server 1.12.2, OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
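
    For reference, a minimal sketch of the client-side ~/.ssh/config the poster describes (the host alias and hostname are hypothetical; the options are the ones named above):

      Host home
          HostName workstation.example.org   # hypothetical address of the home workstation
          ForwardX11 yes                     # same as ssh -X
          Compression yes                    # same as ssh -C
          CompressionLevel 9                 # the "highest CompressionLevel" mentioned above
          Ciphers blowfish-cbc               # the cheap cipher chosen by the poster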

    Read the article

  • New keyboard for linux: Adesso Tru-Form or MS Natural Keyboard 4000?

    - by Andrea
    Hi folks! I'm going to buy a new ergonomic keyboard for my laptop. In the following, keep in mind that I live in Italy. I considered the following models. Adesso PCK-308UB - Adesso Tru-Form™ Pro - Contoured Ergonomic Keyboard with TouchPad-PS2. Pros: has a built-in touchpad in the same position as my laptop's; somewhat cheaper than the alternative below. Cons: the surface doesn't seem to be bowl-shaped; the keys seem to lie on a straight, slightly inclined surface, which seems to be an idea used extensively in other ergonomic keyboards; according to a few comments on the net, new Adesso keyboards seem to lack robustness and are likely to lose small parts after a few weeks or months (other users, instead, seem never to have had any problem in years and swear by their quality and comfort; those who had problems, however, lamented a lack of responsiveness from the manufacturer); I'm not sure whether the keyboard (at least the standard keys) and the touchpad will both be recognized correctly under Linux distros (I mostly use FC, btw); last time I checked, Adesso didn't have local resellers in my country. Microsoft Natural Ergonomic Keyboard. Pros: recognized as one of the most comfortable keyboards; reliable customer service operating in my country; AFAIK there are several documented ways to get the extra buttons to work with Linux. Cons: it doesn't have a built-in touchpad, and it has a numeric keypad that wastes space and lengthens the reach to the mouse. But there could be other keyboards I haven't considered yet, so here follows my ideal keyboard wishlist, ordered by priority: Linux compatible; basic ergonomic design, which entails a split, tilted keyboard and pads; advanced ergonomic design, like Truly Ergonomic's or Kinesis', where special keys (like Enter, Caps Lock...) are placed symmetrically in the middle to be used by the thumbs; a built-in touchpad/trackball placed under the keyboard (I just love this on my notebook; I think it's pretty effective, since it allows my hand to rest naturally every time I use it - any opinion on this?); high-quality switches, like Cherry's (unsure about this one); additional programmable keys placed near the usual ones, to simplify typing shortcuts. TIA Andrea

    Read the article

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access. With FTP access I can upload PHP scripts, but I can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also unfortunately a Windows server (and we've been a Linux shop for over a decade now). So, if I want to evaluate MySQL & disk speed performance through PHP on a generic server, what is the best way to do this? There are already tools like: https://github.com/raphaelm/php-benchmark or https://github.com/InfinitySoft/php-benchmark But I'm surprised there isn't something that someone has already set up & configured to just run through and do some basic testing of a server's responsiveness. Every time we evaluate a new server environment it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub, when there wasn't yet a handy place to post scraps of code like this. Originally posted in http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php but it was recommended that I re-post it here.

    Read the article

  • Web browsing through SSH tunnel gets stuck/clogged

    - by endolith
    I use tools like Tunnelier to log into my home Tomato router through SSH, and then use it as a proxy for web browsing, tunnel for Remote Desktop/VNC, etc. Most days it works great, but some days every page I try to view gets stuck, like the tunnel is clogged. I load a web page and it seems to be loading, then stops, with the little loading icon spinning and nothing happening. I refresh the page, I reboot the router, I reboot the other computers on my home network and turn off any bandwidth-hogging services on them, I've turned on QoS on the router to prioritize SSH. I don't understand what's getting stuck. Rebooting or disconnecting/reconnecting the SSH tunnel improves responsiveness for a minute, but then it gets clogged again. It also seems to help if I don't do anything on the tunnel for a few minutes, then it will be responsive for a bit and then get clogged again. Trying to open a terminal console from Tunnelier is also unresponsive, so it's not just a web browsing problem. Likewise, connecting to http://192.168.1.1 in the browser (to the router's web config through its own tunnel) is also slow/laggy/halting. The realtime bandwidth reported by the router is nowhere near my DSL connection's limits, though it does show big spikes during the laggy times, and the connection is responsive when it shows low bandwidths. How do I troubleshoot something like this?

    Read the article

  • How to better copy&paste big files over RDP?

    - by WebMAOhist
    Recently I made a few attempts to copy & paste a big (1.2 GB) file to a remote computer over RDP. The remote computer is a virtual testing machine with MS Windows Server 2008 Datacenter. First I tried to copy & paste before midnight, when the transfer speed was limited by the client computer's ISP to 100 kB/s. It required a few hours, and I was forced to cancel the transfer since the remote desktop became too unresponsive and sluggish (slow). So I restarted it after midnight, when my local transfer speed is over 4 MB/s. My impression is that, independently of the speed of the copy & paste transfer, the remote computer becomes sluggish while copying over RDP. At the same time, downloading from the internet doesn't make the remote host sluggish. AFAIU, this is because the remote computer's clipboard, and so its memory, becomes overloaded by the transfer. How can I control (restrict) the clipboard usage of a specific process (pasting of a file)? What are the possible ways to control it? Update: After reading that the slow transfer speed is caused by the encryption used for copy & pasting over RDP, and since I am more interested in overall efficiency (both the time to get the file and the possibility to keep working without waiting), I changed the question title from "How to control remote desktop clipboard usage for pasting a big file?" to "How to better copy&paste big files over RDP?" For example, is it better to copy & paste one huge (zip) archive, or to unzip it and copy & paste a folder with the unzipped files? More exactly, I wanted to ask: what are possible ways to improve the overall experience, both the speed of transfer (i.e. availability of the needed file) and the responsiveness of the remote host (making the remote computer available for work before the copy & paste completes)?

    Read the article

  • Linux Distro - GUI similar to Windows

    - by DeaconDesperado
    I am in the process of refurbishing several older laptop machines for use by a couple of college guys we have in training to learn basic web development in Python. These are students who intern at my company and are hoping to do some work when the summer comes building simple client-oriented webapps (learning the basics of OOP, MVC webapp design in Flask, etc.). We're trying to function as the "practical" side of their education. I would like to get them set up on these machines we have sitting about, but I'd like to use a Linux distro that has a GUI that closely approximates what they are being compelled to use at school (Windows). I don't really have much of a preference as far as the GUI goes, since much of what we'll be learning together is accomplished on the command line. I just see this as an easier adjustment for them while they are still reliant on a graphical environment. In the past I'd go straight for Ubuntu, but since they started using the Unity GUI the responsiveness overall can be pretty clunky on older machines, especially since these machines (there are four of them) run the gamut on specs (though all are at least 1.0 GHz and none have anything better than basic integrated video). Has anyone had to set up a similar working environment in Mint, bare Debian or Zorin? Thanks.

    Read the article

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operations (copying 10 GB files from one drive to another, copying a similar file over the network, merging Hyper-V snapshots, compressing large files), the performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfer if this would give me more responsiveness. System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8 GB RAM, 2 SATA drives - 320 GB (ST3320418AS) and 1 TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken. Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and I could forget about it; is there any way to manage IO priorities on Windows?

    Read the article

  • Ubuntu 12.04 / 12.10 Randomly Freezing - nVidia?

    - by Alix Axel
    My Ubuntu install frequently freezes, sometimes showing a black screen (not very common anymore - in my latest installs), some other times the mouse and keyboard just fail to move and respond (not even Ctrl + Alt + F1 works) and some other times I'm able to move the mouse with a huge delay (2-5 seconds) but I'm not able to do/click anything. I have a pretty strong feeling that this problem is related to my graphics card drivers because: after a hard reset, I usually get error reports about X.org / jockey; and it's common for artifacts to appear during loading / shutdown / whenever, for instance: a pattern filled with £ during log off, an ugly-colored squared pattern during boot, windows that are partially moved (i.e.: only the top half), Firefox renderings that leave the bottom ~30% of the page black. These artifacts appear right before the system freezes. I've installed Ubuntu 12.04 LTS and after several failed attempts to get my dual monitor setup to work properly I tried installing the new 12.10 version, hoping that this new version would have this problem solved... Unfortunately, that was not the case, so I reverted to Ubuntu 12.04. I've tried all the drivers in the Additional Drivers application (even the experimental ones), I've also tried the nvidia-current package from the PPA repository ubuntu-x-swat/x-updates as well as the nouveau OSS driver. Nothing (except no driver at all, with a 640*480 resolution) seems stable. Here is the info of my graphics card: alix@alix-E500:~$ lspci | grep VGA 01:00.0 VGA compatible controller: NVIDIA Corporation G86 [GeForce 8400M G] (rev a1) alix@alix-E500:~$ sudo lshw -C video [sudo] password for alix: *-display description: VGA compatible controller product: G86 [GeForce 8400M G] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nouveau latency=0 resources: irq:16 memory:fd000000-fdffffff memory:d0000000-dfffffff memory:fa000000-fbffffff ioport:cc00(size=128) memory:fe0e0000-fe0fffff Right now, I don't even have my 22" monitor connected as I can't even get my laptop display to work properly and without freezes. I've searched, read and tried all that I could (over several fresh reinstalls) to fix the problem, but so far, no solution has proven definitive. I'm sorry I can't specify which symptom maps to which driver, but I've been trying to solve this one on my own without logging what I'm doing; perhaps someone here will be able to point me to a certain fix, and if not I'll keep updating this question as I go along. Please let me know if any more info is needed to pinpoint the exact problem. Trying out NVIDIA accelerated graphics driver (version 173). Scrolling and minimizing / maximizing windows take between 2 and 5 seconds to finish. Context menus also pop up very slowly and typing seems delayed by ~1 second. No critical issues so far. Firefox rendering of the Save Edits button is consistently messed up (random black lines in the top). Trying out NVIDIA accelerated graphics driver (version current) [Recommended]. All the delays mentioned above and the buggy rendering of the Save Edits button are gone, but I'm noticing that the whole screen flashes black for a couple of microseconds, and while I was writing this text for the first time, the bottom 30% of the screen went black and I couldn't do anything (not even Ctrl + Alt + F1 would work). Had to force a hard reset.
Also, the system hung a little for a couple of seconds with the fade out of the "Restart" menu. Trying out NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-304). Same symptoms as before; it crashed once while I was trying to install Chromium and again after a hard reset when I was trying to remove the driver. The bottom of the screen did not go black and I could move my mouse both times. Ctrl + Alt + F1 didn't work. The ugly-colored pattern also showed up during the second boot. Trying out NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-307). The system crashed as soon as I clicked something. Had to do a fresh re-install. Trying out Nouveau: Accelerated Open Source driver for nVidia cards. Artifacts still show up during boot, but other than that this one seems stable. As soon as I connected my second monitor, the responsiveness dropped a lot; animations and video are somewhat slow. I'm gonna try this solution http://askubuntu.com/a/98871/9018 later on.

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories: constraining the amount of time that the Console stalls waiting for communication, and reducing and streamlining the amount of data required for an update. A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's Configuration: General, Advanced section for a domain. The default value for this setting is 0, which means wait indefinitely, and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency. In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support this. For example, if a user requests a Servers Table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console. To support these changes, these Console pages have been re-written to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is: (1) for each configuration mbean (i.e., Servers), populate rows with configuration attributes from the fast, local mbean server; (2) find a WorkManager; (3) for each server, create a Work instance to obtain runtime mbean attributes for the server and schedule it in the WorkManager; (4) call WorkManager.waitForAll to wait for the WorkItems to finish, constrained by the Management Operation Timeout; (5) for each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data; (6) display the collected data in the table. In addition to these changes to constrain how long the console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.)
We decided supporting that edge case did not warrant the performance impact for all, and instead we now only look at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment. Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet: using Work Management to obtain health concurrently; applying the operation timeout configuration to constrain how long we will wait; caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old; and eliminating health collection if this portlet is minimized. Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.
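
    As a rough sketch of the fan-out pattern described above (not the Console's actual source), here is how the per-server runtime fetch could look with the CommonJ Work Manager API that WebLogic exposes; the Row class, the attribute names and the JNDI name are illustrative assumptions:

      import java.util.ArrayList;
      import java.util.List;
      import javax.naming.InitialContext;
      import commonj.work.Work;
      import commonj.work.WorkItem;
      import commonj.work.WorkManager;

      public class ServerTableSketch {

          // Hypothetical table row: local configuration plus (possibly missing) runtime data.
          static class Row {
              String serverName;     // from the fast, local configuration mbean
              String runtimeState;   // filled in only if the managed server answers in time
          }

          // One Work instance per managed server; run() stands in for the remote runtime mbean call.
          static class RuntimeFetch implements Work {
              private final Row row;
              RuntimeFetch(Row row) { this.row = row; }
              public void run() { row.runtimeState = fetchRuntimeState(row.serverName); }
              public boolean isDaemon() { return false; }
              public void release() { }
          }

          List<Row> buildRows(List<String> serverNames, long operationTimeoutMillis) throws Exception {
              // (1) populate rows from the local configuration data
              List<Row> rows = new ArrayList<Row>();
              for (String name : serverNames) {
                  Row row = new Row();
                  row.serverName = name;
                  rows.add(row);
              }

              // (2) find a WorkManager (default one, assuming a wm resource-ref is declared)
              WorkManager wm = (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");

              // (3) schedule one Work instance per server
              List<WorkItem> items = new ArrayList<WorkItem>();
              for (Row row : rows) {
                  items.add(wm.schedule(new RuntimeFetch(row)));
              }

              // (4) wait for all of them, but never longer than the Management Operation Timeout
              wm.waitForAll(items, operationTimeoutMillis);

              // (5) flag servers whose runtime data never arrived
              for (Row row : rows) {
                  if (row.runtimeState == null) {
                      row.runtimeState = "(no response within the Management Operation Timeout)";
                  }
              }
              // (6) hand the rows to the table renderer
              return rows;
          }

          // Placeholder standing in for the remote runtime mbean interaction.
          static String fetchRuntimeState(String serverName) { return "RUNNING"; }
      }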

    Read the article

  • #iPad at One Week: A Great Device Made with a Heavy Hand

    - by andrewbrust
    I have now had my iPad for a little over a week. In that time, Apple introduced the world to its iPhone OS 4 (and the SDK agreement's draconian new section 3.3.1), HP introduced its Slate, and Microsoft got ready to launch Visual Studio 2010 and .NET 4.0. And through it all I have used my iPad. I've used it for email, calendar, controlling my Sonos, and writing an essay. I've used it for getting on TripIt and Twitter, and surfing the Web. I've used it for online banking, and online ordering and delivery of food. And the verdict? Honestly? I think it's a great device and I thoroughly enjoy using it. The screen is bright and vibrant. I am surprisingly fast and accurate when I type on it. The touch screen's responsiveness is nearly flawless. The software, including a number of third party applications, includes pleasing animations and use of color that make it fun to get work done. And speaking of work, the Exchange integration is, dare I say it, robust. Not as full-featured as on a PC or Windows Mobile device, but still offering core functionality and, so far at least, without bugs. The UI is intuitive, not just to me, but also to my 5 1/2 year old, and also to my nearly-3-year-old son. They picked it up and, with just a few pointers from me, they almost immediately knew what to do, whether they were looking at photos (and swiping/flicking along as they did so), using a drawing program, playing a game, or watching YouTube videos. The younger of the two of them even tried to get up on a chair and grab the thing today. He dropped it, from about 4 feet off the ground. And it's still fine. (Meanwhile, I'll be keeping it on a higher shelf.) I cannot fully describe yet what makes this form factor and this product so appealing. Maybe it's that it's an always-on device. Maybe it's just being able to hold such a nice, relatively large display so close. Maybe it's the design sensibility that seems to pervade throughout the app ecosystem. Or maybe it's that one's fingers, and not pens or mice, are the software's preferred input device. Whatever the attraction, it's strong. And no matter how much I tend to root for Microsoft and against Apple, Cupertino has, in my mind, scored big. Can Microsoft compete? Yes, but not with the Windows 7 standard UI (nor with individual OEMs' own UIs on top). I hope Microsoft builds a variant of Windows Phone 7 specifically for tablet devices. And I hope they make it clear that all developers, and programming languages, are welcome to the platform. Once that's established, the OEMs have to build great hardware with fast, responsive touch screens, under Microsoft's watchful eye. That may be the hardest part of getting this right. No matter what, Microsoft's got a fight on its hands. I don't know if it can count on winning that fight, either. But Silverlight and Live Tiles could certainly help. And so can treating developers like adults. Apple seems intent on treating their devs like kids, and then giving the kids a curfew. For that, dev-friendly Microsoft may one day give thanks.

    Read the article

  • Virtualized data centre&ndash;Part three: Architecture

    - by marc dekeyser
    Having the basics (as discussed in the previous articles) is all good and well, but how do we get started on this?! It can be quite daunting after all! From my own point of view I can absolutely confirm your worries and concerns, but also tell you that it is not as hard as it seems! Deciding on what kind of motherboard to buy, which processor and how much memory is an activity you will spend quite some time doing research on. And that is not even mentioning storage! All in all it comes down to setting your expectations and your budget. Probably adjusting your expectations according to your budget :). Processors: As a rule of thumb you want VT-d (virtualization) technology built in to the processor, allowing you to have 64-bit machines running on your host. Memory: The more the better! If you are building a home lab don't bother with ECC unless you are going to run machines that absolutely should be on all the time and your comfort depends on it! Motherboard: Depends on what you are going to do with storage. If you are going the NAS way then the number of SATA ports/RAID capabilities does not really matter. If you decide to have a single server with lots of dedicated storage it obviously matters how many SATA ports you will have; alternatively you could use a RAID controller (but these set you back a pretty penny if you want one. DELL 6i's are usually available for a good bargain if you can find one!). Easiest is to get one with a built-in (on-board) graphics card, as a separate card just adds more heat, power usage and possible points of failure. Networking: Just like your choice of motherboard, the networking side tends to depend on how you want to go. A single virtualization host with local storage can usually get away with having a single network card; a cluster or server which uses iSCSI storage tends to have more than one teamed up :). Storage: The dreaded beast from the dark! The horror which lives in the forest! The most difficult decision you are going to make in the building of your lab. Why, you might ask? Simple my friend, having the right choice of storage can make or break your virtualization solution. The performance of your storage choice will have an important impact on the responsiveness of your virtual machines and the deployment of new machines. It also makes a run at your budget! If you decide to go the NAS route you will be dropping a lot more money than if you just have a bunch of disks sitting in a server and manually distribute the virtual machines over the disks. Platform: I'm a Microsoftee so Hyper-V is a dead giveaway for me. If you are interested in using VMware I won't stop you, but the rest of my posts will be oriented towards Server 2012 Hyper-V (aka 3.0)! What did I use? Before someone asks me this in the comments I'll give you a quick run down of what I am using. - Intel 2.4 quad core processors (i something something) - 24 GB DDR3 memory - Single disk in each server (might look at this as I move the servers to 2012) - Synology DS1812+ NAS - 3 network interfaces where possible - HP 1800 ProCurve managed switch. I decided to spring for the NAS as I will also be using it for backups and media storage (which is working out quite nicely with my Xbox 360, I must say). At the time of building my 2 boxes (over a year and a half ago) these set me back about 900 euros each, so I can imagine you can build the same or better for a lower price. The next article will be diagramming what I want to achieve and starting a build on the Hyper-V 3.0 cluster!

    Read the article

  • Top 5 Reasons to Invest in Enterprise 2.0 Technologies

    - by kellsey.ruppel(at)oracle.com
    In 2010, Oracle's portal, content management, and collaboration solutions evolved rapidly, supported by increasingly deep integrations across Oracle Fusion Middleware and the entire Oracle stack. In light of these developments, we asked Vince Casarez, vice president of Enterprise 2.0 product management, for his top five reasons to invest in Enterprise 2.0 (E2.0) technologies--including real-world examples of businesses already realizing the benefits of next-generation E2.0 technologies. 1. Provide a modern user experience As E2.0 technologies gain widespread adoption, customers and employees expect intuitive Web experiences that are both interactive and community-based. By partnering with Oracle, Alcatel-Lucent Enterprise Group is already making that happen. With 76,000 employees and operations in more than 100 countries, the company wanted a streamlined, personalized user experience with more relevant content in fewer clicks. Working with Oracle, they created a global support portal that supports personalization and integration with Oracle Business Intelligence Enterprise Edition and Oracle E-Business Suite--and drives collaboration with tools such as wikis, blogs, and forums. Learn more about Alcatel-Lucent Enterprise Group's Global Support Portal in this Webcast. 2. Improve productivity and collaboration As E2.0 technologies mature, Oracle anticipates companies moving beyond the idea of simply creating yet another Facebook-like destination for its employees, and instead shaping work environments around specific business tasks. After rapid growth--both organic and through acquisition--construction and infrastructure services leader Balfour Beatty found itself with multiple homegrown intranet sites with very minimal content-sharing capabilities. Today, thanks to Oracle WebCenter Suite, Oracle WebCenter Spaces, Oracle WebCenter Services, and Oracle Universal Content Management, Balfour Beatty is benefiting from collaborative workspaces, a central place to use and work with documents, and unified search across content. 3. Leverage business processes and applications Modern portals are now able to integrate users, content, and business processes in unprecedented ways. To take advantage of these new possibilities, leading dairy provider Land O'Lakes has implemented a fully integrated ERP solution together with Oracle's ECM platform. As a result, Land O'Lakes has been able to achieve better information management and compliance, increased adoption rates for enterprise tools, and increased business process efficiency thanks to more effective information sharing and collaboration. 4. Enhance customer and supplier relationships Companies have begun to move beyond the idea that E2.0 simply means enabling customer reviews or embedding chat functionality. They are taking E2.0 to the next level and providing interactive experiences for their customers. For example, to enhance customer and supplier relationships, Wind River, a global leader in device software optimization, successfully partnered with Oracle to: Integrate ERP and ECM content to provide customers the latest and most relevant support information for products they own Enable customers to personalize their support experience and receive updates regarding patches, application notes, and other relevant content Enable discussions, wikis, and blogs for more efficient collaboration 5. 
Increase business visibility and responsiveness By strategically embedding collaboration and communication tools into specific business contexts, companies significantly increase visibility into changing business conditions--and can respond much more agilely. Texas A&M University System--one of the largest systems of higher education in the U.S.--partnered with Oracle to create a unified repository that would enable the retrieval of research and grant data from disparate systems via an Enterprise 2.0 user interface. By enabling researchers to customize their own portals with easy-to-use tools, they have also been able to significantly reduce their reliance on the IT department. Learn how other Oracle customers are leveraging Enterprise 2.0 technologies.

    Read the article

  • Multithreading recommendation based on program description

    - by user260197
    I would like to describe some specifics of my program and get feedback on which multithreading model would be most applicable. I've spent a lot of time now reading on ThreadPool, Threads, Producer/Consumer, etc. and have yet to come to solid conclusions. I have a list of files (all the same format) but with different contents. I have to perform work on each file. The work consists of reading the file, some processing that takes about 1-2 minutes of straight number crunching, and then writing large output files at the end. I would like the UI to still be responsive after I initiate the work on the specified files. Some questions: What model/mechanisms should I use? Producer/Consumer, WorkPool, etc. Should I use a BackgroundWorker in the UI for responsiveness, or can I launch the threading from within the Form as long as I leave the UI thread alone to continue responding to user input? How could I take the results or status of the work on each individual file and report it to the UI in a thread-safe way, to give user feedback as the work progresses (there can be close to 1000 files to process)? Update: Great feedback so far, very helpful. I'm adding some more details that were asked for below: output is to multiple independent files; there is one set of output files per "work item", which then itself gets read and processed by another process before the "work item" is complete; the work items/threads do not share any resources; the work items are processed in part using an unmanaged static library that makes use of the Boost libraries.
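
    A common shape for this kind of pipeline, sketched here in Java purely to illustrate the pattern (the question itself is about .NET, where the analogues would be a ThreadPool/BackgroundWorker plus a UI-thread dispatcher); the per-file processing and the progress callback are hypothetical placeholders:

      import java.nio.file.Path;
      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.atomic.AtomicInteger;

      public class BatchProcessor {

          // Callback the UI layer supplies; the UI implementation must marshal back onto
          // its own UI thread (Invoke/BeginInvoke in WinForms, the Dispatcher in WPF).
          public interface ProgressListener {
              void fileFinished(Path file, int done, int total);
          }

          public static void processAll(List<Path> files, ProgressListener listener)
                  throws InterruptedException {
              // A small fixed pool: the work is CPU-bound number crunching, so more
              // threads than cores mostly adds contention.
              int threads = Runtime.getRuntime().availableProcessors();
              ExecutorService pool = Executors.newFixedThreadPool(threads);
              AtomicInteger done = new AtomicInteger();

              for (Path file : files) {
                  pool.submit(() -> {
                      crunch(file);   // read, 1-2 minutes of processing, write output files
                      listener.fileFinished(file, done.incrementAndGet(), files.size());
                  });
              }

              pool.shutdown();                      // no new work; queued files still run
              pool.awaitTermination(1, TimeUnit.DAYS);
          }

          // Placeholder for the per-file work described in the question.
          private static void crunch(Path file) { /* ... */ }
      }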

    Read the article

  • New to MVVM - Best practices for seperating Data processing thread and UI Thread?

    - by OffApps Cory
    Good day. I have started messing around with the MVVM pattern, and I am having some problems with UI responsiveness versus data processing. I have a program that tracks packages. Shipment and package entities are persisted in a SQL database, and are displayed in a WPF view. Upon initial retrieval of the records, there is a noticeable pause before displaying the new shipments view, and I have not even implemented the code that counts shipments that are overdue/active yet (which will necessitate a tracking check via web service, and a lot of time). I have built this with the Ocean framework, and all appeared to be going well until I first started my foray into multi-threading. It broke, and it appeared to break something in Ocean... Here is what I did: Private QueryThread As New System.Threading.Thread(AddressOf GetShipments) Public Sub New() ' Insert code required on object creation below this point. Me.New(ViewManagerService.CreateInstance, ViewModelUIService.CreateInstance) 'Perform initial query of shipments 'QueryThread.Start() GetShipments() Console.WriteLine(Me.Shipments.Count) End Sub Public Sub New(ByVal objIViewManagerService As IViewManagerService, ByVal objIViewModelUIService As IViewModelUIService) MyBase.New(objIViewModelUIService) End Sub Public Sub GetShipments() Dim InitialResults = From shipment In db.Shipment.Include("Packages") _ Select shipment Me.Shipments = New ShipmentsCollection(InitialResults, db) End Sub So I declared a new Thread, assigned it the GetShipments method and instanced it in the default constructor. Ocean freaks out at this, so there must be a better way of doing it. I have not had the chance to figure out the usage of the SQL ORM thing in Ocean, so I am using Entity Framework (perhaps one of these days I will look at NHibernate or something too). Any information would be greatly appreciated. I have looked at a number of articles and they all have examples of simple uses. Some have mentioned the Dispatcher, but none really go very far into how it is used. Anyone know any good tutorials? Cory

    Read the article

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: What is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512 byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise with today's PCs, I could go for a more memory hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)? Thanks in advance for any ideas, Adam
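
    The question is about C#'s FileStream, but the tradeoff itself is language-agnostic. One way to ground the choice in measurements rather than guesswork is to copy the same file with a few candidate buffer sizes and time each run; a rough sketch (written in Java here, with a hypothetical test file name):

      import java.io.FileInputStream;
      import java.io.FileOutputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.OutputStream;

      public class BufferSizeProbe {
          public static void main(String[] args) throws IOException {
              int[] candidates = { 2 * 1024, 16 * 1024, 64 * 1024, 1024 * 1024 };
              for (int size : candidates) {
                  long start = System.nanoTime();
                  copy("sample.bin", "sample.out", size);   // hypothetical test file
                  long millis = (System.nanoTime() - start) / 1_000_000;
                  System.out.printf("%7d-byte buffer: %d ms%n", size, millis);
              }
          }

          static void copy(String from, String to, int bufferSize) throws IOException {
              byte[] buffer = new byte[bufferSize];
              try (InputStream in = new FileInputStream(from);
                   OutputStream out = new FileOutputStream(to)) {
                  int read;
                  while ((read = in.read(buffer)) != -1) {
                      out.write(buffer, 0, read);
                      // A progress event would be raised here: with tiny buffers it fires
                      // very often, with multi-MiB buffers the UI only hears back every
                      // few reads - that is the responsiveness tradeoff described above.
                  }
              }
          }
      }

    In practice the throughput curve usually flattens out well below the multi-MiB range, which keeps progress events frequent without giving up much copy speed.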

    Read the article

  • To (monkey)patch or not to (monkey)patch, that is the question

    - by gsakkis
    I was talking to a colleague about one rather unexpected/undesired behavior of some package we use. Although there is an easy fix (or at least workaround) on our end without any apparent side effect, he strongly suggested extending the relevant code by hard patching it and posting the patch upstream, hopefully to be accepted at some point in the future. In fact we maintain patches against specific versions of several packages that are applied automatically on each new build. The main argument is that this is the right thing to do, as opposed to an "ugly" workaround or a fragile monkey patch. On the other hand, I favor practicality over purity and my general rule of thumb is that "no patch" > "monkey patch" > "hard patch", at least for anything other than a (critical) bug fix. So I'm wondering if there is a consensus on when it's better to (hard) patch, monkey patch or just try to work around a third party package that doesn't do exactly what one would like. Does it have mainly to do with the reason for the patch (e.g. fixing a bug, modifying behavior, adding a missing feature), the given package (size, complexity, maturity, developer responsiveness), something else, or are there no general rules, so one should decide on a case-by-case basis?

    Read the article

  • DWM and painting unresponsive apps

    - by Doug Kavendek
    In Vista and later, if an app becomes unresponsive, the Desktop Window Manager is able to handle redrawing it when necessary (move a window over it, drag it around, etc.) because it has kept a pixel buffer for it. Windows also tries to detect when an app has become unresponsive after some timeout, and tries to make the best of the situation -- I believe it dims out the window, adds "Not Responding" to its title bar, and perhaps some other effects. Now, we have a skinned app that uses window regions and layered windows, and it doesn't play well with these effects. We've been developing on XP, but have noticed a strange effect when testing on Vista. At some points the app may spend a few moments on some calculation or callback, and if it passes the unresponsive threshold (I've read that it's a five second timeout, but I cannot find a link), a strange graphical problem occurs: any pixels that would be 100% transparent due to the window regions turn black, which effectively makes the window rectangular again, with a black background. There seem to be other anomalies, with the original window's pixels being shifted a bit in some child dialogs. I am working on reducing such delays (ideally Windows will never need to step in like this), and trying to maintain responsiveness while it's busy, but I'd still like to figure out what is causing it to render like that, as I can't guarantee I can eliminate all delays. Basically, I just would like to know what Windows is doing when this happens, and how I can make my app behave properly with it. Skinned apps have to still work on Vista and later, so I need to figure out what I'm doing that's non-standard. I don't even know exactly how to look for information into how Windows now handles unresponsive apps, as my searches only return people having issues with apps that are unresponsive, or very rudimentary explanations of what the DWM does with such apps. Heck I'm not even 100% sure it's the DWM responsible, but it seems likely. Any potential leads? Photo of problem; screen shots won't capture the effect (note that the white dialog's buffer is shifted -- it is shifted exactly by the distance it has been offset from the main (blue) window):

    Read the article

  • Debugging an unresponsive iPhone UI

    - by buggles
    I have an application that needs to update its display every minute or so. To achieve this I was using performSelector:withObject:afterDelay, calling a selector that most of the time just changes some text in a label based on a very simple (and quick) calculation. [self performSelector:@selector(updateDisplay) withObject:nil afterDelay:60]; Occasionally during an update, I have to go off and get some data from the web, and so I do that in another thread using detachNewThreadSelector. That all worked, and the "performSelector after Delay" call completes in a tiny fraction of a second, and only runs once a minute. Despite this, and despite running fine on the simulator, the single button in the app is largely unresponsive, not responding to multiple stabs. So, I had assumed performSelector:afterDelay would not block, but I'm now wondering if it is blocking in some way? I even tried NOT doing the web lookup in case this was somehow still impacting the responsiveness. No joy. [NSThread detachNewThreadSelector:@selector(updateFromURL) toTarget:self withObject:nil]; I then pushed it through Shark to see if I could see anything obvious. From here I can see the web lookup is the only thing taking any time, but it is only being done every couple of minutes, and then clearly not running on the main thread. The app itself is consuming a tiny fraction of 1% of the CPU (0.0000034%) over 20 minutes, so it just must be a blocking issue. So, am I missing something about performSelector:afterDelay? What other common newbie mistakes might I be making? If it helps, although I've been developing applications for over 20 years, the previous 10 have been largely Java. Perhaps I have a Java assumption loaded :-) Essentially I have assumed the main thread is like the EDT (only do UI stuff on it, but keep everything else off it).

    Read the article

  • javascript table sorting/paging (client-side). How big is too big?

    - by Aheho
    I'm using a jQuery plugin called Tablesorter to do client-side sorting of a log table in one of my applications. I am also making use of the tablepager add-in. I really like the responsiveness that client-side sorting and paging brings to the party. I also like how you don't have to hit the web server or database repeatedly. However, I can see that, in time, the log I'm displaying could grow quite large. I'm sure there comes a point where client-side paging and sorting is going to be impractical. At what point will this technique begin to collapse under its own weight? 500 records? 2000 records? 10,000 records? EDIT: In a nutshell, what criteria would you use to determine whether you are going to use client-side sorting/paging as opposed to server-side paging? Does the size of the expected result set factor into your decision? Where is the tipping point?

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project that is coming around the bend this summer that is going to involve, potentially, an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and I've found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually being targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking of how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C... so I am looking for some solid articles and advice on the following: 1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM 2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain 3) Anything dealing with memory paging, particularly as it pertains to images I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded in from an external server.

    Read the article

  • call addsubview again causes slowdown

    - by Tom
    Hi guys, I am writing a little music-game for the iPhone. I am almost done; this is the only issue which keeps me from rolling it out. Any help to solve this is much appreciated. This is what I do: in my appDelegate I add my menu-view-screen to the window. The menu-view-screen acts as a container and controls which view gets presented to the user. That means, on the menu-view-screen I got 4 buttons (new game, options, faq, highscore). When the user clicks on a button, something like this happens: if (self.gameViewController == nil) { GameViewController *viewController = [[GameViewController alloc] initWithNibName:@"GameViewController" bundle:nil]; self.gameViewController = viewController; [viewController release]; } [self.view addSubview:self.gameViewController.view]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(handleSwitchViewNotificationFromGameView:) name:@"SwitchView" object:gameViewController]; When the user returns to the menu, this piece of code gets executed: [[NSNotificationCenter defaultCenter] removeObserver:self]; [self.gameViewController viewWillDisappear:YES]; [self.gameViewController.view removeFromSuperview]; This works fine for all screens but not for the game screen (well, this is the only one with heaps of user interaction), meaning the responsiveness of the iPhone (when playing tones) gets really slow. The performance is fine when I display the game view for the first time. It starts getting slower as soon as I add it to the menu-view's container subviews again (addSubview), basically when opening up a new game. Any ideas what causes this (or how to get around it)? Thanks heaps. Best regards, Tom

    Read the article

  • Align col in a bootstrap collapsable menu

    - by Grimm
    I got my hands on Bootstrap recently and I'm still discovering it. I made a collapsible menu from a tutorial online, but now I want to add an image to each entry of my menu, and I've hit a problem I wasn't expecting. I want my image to always be aligned with the text in the menu, but it keeps ending up on top of it. I tried removing the row and col tags and forgetting about the responsiveness of my menu, but it still doesn't work... Here is the source code of my menu: <div id="menu" class="menu nav-collapse collapse width"> <div class="collapse-inner"> <div class="navbar navbar-inverse"> <div class="menu_titlenav nav-tabs nav-stacked"> <h3>Menu</h3> </div> </div> <div class="row well menu_entry"> <div class="span2 search_ico_ina"></div> <div class="span9 search_ent_ina">Recherche</div> </div> <div class="row well menu_entry" > <div class="span2 pro_ico_ina"></div> <div class="span9 pro_ent_ina">Espace PRO</div> </div> <div class="row well menu_entry"> <div class="span2 account_ico_ina"></div> <div class="span9 account_ent_ina">Mon Compte</div> </div> </div> </div> and the entire source: http://jsfiddle.net/Grimtork/JLFMW/

    Read the article

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. An example using some of the code derived from the splmeter project: private static final int FREQUENCY = 8000; private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO; private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT; private int BUFFSIZE = 50; private AudioRecord recordInstance = null; ... android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO); recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, 8000); recordInstance.startRecording(); short[] tempBuffer = new short[BUFFSIZE]; int retval = 0; while (this.isRunning) { for (int i = 0; i < BUFFSIZE - 1; i++) { tempBuffer[i] = 0; } retval = recordInstance.read(tempBuffer, 0, BUFFSIZE); ... // process the data } This works on the HTC Dream and the HTC Magic perfectly without any log warnings/errors, but causes problems on the emulators and the Nexus One device. On the Nexus One, it simply never returns useful data. I cannot provide any other useful information as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2), I get weird errors from the AudioFlinger and buffer overflows from the AudioRecordThread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread from the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
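
    One thing commonly suggested for exactly this symptom (works on some HTC devices, silent or glitchy elsewhere) is that the hard-coded sizes above are too small: the constructor is given an 8000-byte buffer and read() is called 50 shorts at a time. A sketch of the usual fix, sizing everything from AudioRecord.getMinBufferSize() and checking the record state before reading (the surrounding class and the isRunning flag are assumptions mirroring the snippet above):

      import android.media.AudioFormat;
      import android.media.AudioRecord;
      import android.media.MediaRecorder;

      public class RecorderSketch {
          private static final int FREQUENCY = 8000;
          private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO;
          private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;

          private volatile boolean isRunning = true;

          public void record() {
              // Ask the device how big the buffer has to be instead of hard-coding 8000 bytes.
              int minBytes = AudioRecord.getMinBufferSize(FREQUENCY, CHANNEL, ENCODING);
              if (minBytes == AudioRecord.ERROR_BAD_VALUE || minBytes == AudioRecord.ERROR) {
                  return;                                   // parameters unsupported on this device
              }

              AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                      FREQUENCY, CHANNEL, ENCODING, minBytes * 2);   // a little headroom
              if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
                  return;                                   // construction silently failed
              }

              recorder.startRecording();
              short[] buffer = new short[minBytes / 2];     // 16-bit PCM: 2 bytes per sample
              while (isRunning) {
                  int read = recorder.read(buffer, 0, buffer.length);
                  if (read > 0) {
                      // process 'read' samples here
                  }
              }
              recorder.stop();
              recorder.release();
          }
      }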

    Read the article

  • Is A Web App Feasible For A Heavy Use Data Entry System?

    - by Rob
    Looking for opinions on this: we're working on a project that is essentially a data entry system for a production line, with heavy data input by users who normally work in Excel or other thick-client data systems. We've been told (as a consequence) that we have to develop this as a thick client using .NET. Our argument was to develop it as a web app, as that resolves a lot of issues and would be easier to write and maintain. Their argument against the web is that (supposedly) the web is not ready yet for a heavy-duty data entry system, and that the web in a browser does not offer the speed, responsiveness, and fluid experience for the end-user that a thick client can (citing things such as drag and drop, rapid auto-entry and data navigation, etc.). Personally, I think that with good form design and jQuery/AJAX, a web app could do everything a thick client does just as well, and they just don't know what they're talking about. The irony is that a thick client has to go to a lot more effort to manage the deployment and connectivity back to the central data server than a web app would need to, so in terms of speed I would expect a web app to be faster. What are the thoughts of those out there? Are there any technologies in production use today with which heavy-duty data entry systems are being developed as web apps? Appreciate any feedback. Regards, Rob.

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: What is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512 byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise with today's PCs, I could go for a more memory hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)?

    Read the article
