Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • SQL Server SQLCMD Basics

    Sqlcmd makes many SQL Server tasks, such as automating test runs and maintenance jobs, easier and quicker. The sqlcmd command-line utility is valuable to any database developer or DBA as the prime means of executing batches of SQL statements against SQL Server and saving the results to a file. Rob Sheldon gives you the basic facts about this great utility.
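
    For instance, automating a test run can be as simple as shelling out to sqlcmd from a small wrapper program. Here is a minimal C# sketch (the server, script, and output names are placeholders) that uses sqlcmd's documented switches: -S (server), -E (Windows authentication), -i (input script), and -o (output file):

        using System.Diagnostics;

        class SqlcmdRunner
        {
            static void Main()
            {
                // Placeholder names: point these at your own server and scripts.
                var psi = new ProcessStartInfo
                {
                    FileName = "sqlcmd",
                    Arguments = "-S localhost -E -i nightly-tests.sql -o test-results.txt",
                    UseShellExecute = false
                };
                using (var proc = Process.Start(psi))
                {
                    proc.WaitForExit(); // results land in test-results.txt
                }
            }
        }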

    Read the article

  • Transferring a site from a shared to a VPS account

    - by N e w B e e
    I have a hosting account at HostGator. It was a shared account, and I had three websites on it. I have now upgraded my account to a VPS at HostGator, and I want to transfer only ONE website, say mydom.net, over to the new VPS. This website includes a WordPress installation as well as custom pages and setup. Can somebody please guide me on how to transfer the site to my new account quickly, accurately, and in such a way that the website remains in working condition? What should I do about WordPress - will simply copying it work? (I don't think so.) If not, how can I move it? I need a guideline, and I am asking the question in the hope that many others will learn from it just as I am learning; answering it like a manual would help people a lot. Thanks to all.

    Read the article

  • NetAdvantage - jQuery, ASP .NET MVC and HTML5 UI Components released for Web Developers

    Built for speed and portability across operating systems, iPads/tablets, and desktops, with multi-browser support. Includes controls for ASP.NET MVC and uses the latest technologies like HTML5 & CSS3. This preview includes a sampling of powerful UI controls: grid, date picker, rating, editors, even a video player! All work with the popular WebKit engine that underpins many modern desktop browsers, without requiring plug-ins or extensions. The grid embraces the latest Web techniques and frameworks, like jQuery client templates and DOM virtualization. Download these essentials for jQuery and ASP.NET MVC from us today.

    Read the article

  • Sharing Large Database Backup Among Team

    - by MattGWagner
    I work on a team of three to five developers who work remotely on an ASP.NET web application. During development we each currently run a full local database, restored from a recent backup, on all of our machines. The current backup, compressed, is about 18 GB. I'm looking to see if there's an easier way to keep all of our local copies relatively fresh without each of us individually downloading the 18 GB file over HTTP from our web server on a regular basis. I guess FTP is an option, but it won't speed the process up at all. I'm familiar with torrents, and the thought keeps hitting me that something like that would be effective, but I'm unsure of the security implications or the process.

    Read the article

  • Sensitive middle-button (mouse)

    - by Gilead
    Whenever I click the middle button (the mouse wheel) on my mouse, multiple click events seem to be sent in quick succession. Case in point: when I middle-click a bookmark folder in Chrome (to open up all the bookmarks in tabs), Chrome opens the same set of bookmarks three times. This is very annoying. Is there a way in X or GNOME to reduce the sensitivity of my mouse's middle button, to prevent multiple click events from being sent? Maybe something like a double-click speed adjustment, but for the middle button?

    Read the article

  • Collisions and Lists

    - by user50635
    I've run into an issue that breaks my collisions. Here's my method: gather input, project the rectangle, check for intersection and IsPassable, then update. The update step is built on object_position += velocity * speed * seconds_passed. Input changes the velocity, which is normalized if its length exceeds 1. This method works well when comparing against just one object; however, when I pass a list (or loop over one with a for loop) to the collision detector, the velocity gets changed back to non-zero as soon as the list reaches an object that passes the test, and the object can pass through. Any solutions would be much appreciated. Side note: is there a more proper way to simulate movement?
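
    A common fix (a minimal sketch, assuming XNA-style Rectangle/Vector2 types and a hypothetical WorldObject with Bounds and IsPassable properties; all names here are placeholders) is to test the projected rectangle against every object in the list first, and only resolve the movement once the whole list has been checked, so a later object that passes the test cannot overwrite an earlier blocking hit:

        // Fields assumed on the entity: position, velocity, speed, boundingBox.
        void UpdateMovement(float secondsPassed, List<WorldObject> worldObjects)
        {
            // Project where the bounding box would be after this frame's movement.
            Rectangle projected = boundingBox;
            projected.X += (int)(velocity.X * speed * secondsPassed);
            projected.Y += (int)(velocity.Y * speed * secondsPassed);

            // Check the whole list before touching velocity; one blocking hit wins.
            bool blocked = false;
            foreach (WorldObject obj in worldObjects)
            {
                if (!obj.IsPassable && projected.Intersects(obj.Bounds))
                {
                    blocked = true;
                    break; // no later, passable object can undo this result
                }
            }

            if (blocked)
                velocity = Vector2.Zero; // stop; later list entries never re-enable movement
            else
                position += velocity * speed * secondsPassed;
        }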

    Read the article

  • Images from remote source - is it possible or is it really a bad practice?

    - by user1620696
    I'm building a management system for websites, and I had an idea related to image galleries that I'm not sure is a good approach. Since image galleries may need a good deal of space, depending on how many images a user uploads and so on, I thought of using cloud services like Dropbox, Mega, and Google Drive to store the images and load them when needed. The obvious problem is that this seems like a useless solution, because downloading the images from the remote source would be slow, making the user experience not so good. Is there any way to store the images of an image gallery at a remote source without the speed making the user experience bad? Or is this really not a good practice?

    Read the article

  • Estimating file transfer time over network?

    - by rocko
    I am transferring a file from one server to another. To estimate the time it would take to transfer some GBs of data over the network, I am pinging the target IP and taking the average round-trip time. For example: I ping 172.26.26.36 and get an average round-trip time of x ms. Since ping sends 32 bytes of data each time, I estimate the network speed to be 2*32*8 (bits) / x = y Mbps - the multiplication by 2 is because x is the average round-trip time. So transferring 5 GB of data will take 5000/y seconds. Am I correct in my method of estimating the time? If you find any mistake, or have any other good method, please share.
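
    For comparison, ping measures the round-trip latency of tiny packets rather than sustained bandwidth, so a common alternative is to time a real sample transfer and extrapolate from that. A rough C# sketch (the URL is a placeholder for any large test file on the target server):

        using System;
        using System.Diagnostics;
        using System.Net;

        class TransferEstimate
        {
            static void Main()
            {
                // Placeholder: any reasonably large test file on the target server.
                const string testUrl = "http://172.26.26.36/testfile.bin";

                var timer = Stopwatch.StartNew();
                byte[] data = new WebClient().DownloadData(testUrl);
                timer.Stop();

                // Extrapolate from the measured rate to the full 5 GB transfer.
                double mbPerSec = (data.Length / 1e6) / timer.Elapsed.TotalSeconds;
                Console.WriteLine("Measured {0:F2} MB/s; ~{1:F0} s for 5 GB",
                    mbPerSec, 5000.0 / mbPerSec);
            }
        }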

    Read the article

  • Calculating Values within a Rolling Window in Transact SQL

    Before SQL window functions were implemented, it was tricky to calculate rolling totals or moving averages efficiently in SQL Server. There are now a number of techniques, but which has the best performance? Dwain Camps gets out the metaphorical stopwatch.

    Read the article

  • Slow transfer to external USB3 hard drive

    - by JMP
    Trying to back up data from a hard drive before reloading Windows, following some issue with how it loads. Having trouble with the file transfer to a USB 3.0/2.0 external NTFS hard drive: I'm getting a transfer speed of about 116.7 kB/sec. In other words, it's taking about 5 hours to transfer 1.4 GB. I've got about 80 GB to go, so the transfer is going to take 11 days. That seems a little on the slow side. Am I missing something? Is there a way to make this faster? There is no issue with the external drive transferring this amount within Windows, but I don't have that option at the moment.

    Read the article

  • Rendering trillions of "atoms" instead of polygons?

    - by Baring
    I just saw a video about what the publishers call the "next major step after the invention of 3D". According to the person speaking in it, they use a huge number of atoms grouped into clouds, instead of polygons, to reach a level of unlimited detail. They tried their best to make the video understandable to people with no knowledge of rendering techniques, and therefore (or for other reasons) left out all details of how their engine works. The level of detail in their video does look quite impressive to me. How is it possible to render scenes using custom atoms instead of polygons on current hardware, in terms of speed and memory? If this is real, why has nobody else even thought of it so far? As an OpenGL developer, I'm really baffled by this and would really like to hear what experts have to say. I also don't want this to look like a cheap advert, so I will include the link to the video only if requested, in the comments section.

    Read the article

  • nVidia GT 220 not working properly with Ubuntu 12.10

    - by Glaedr
    I used to enable the proprietary nVidia drivers on every previous Ubuntu release to get it working properly (otherwise I was forced into a very low resolution with no graphics acceleration), and everything worked fine then. In particular, I noticed - I still don't know why - that on every OS the GPU fan is noisy until the video drivers are loaded (runlevel 5, it seems, in Linux), and then slows down to a normal speed. Today I installed 12.10. Running the live CD, surprisingly, everything worked fine: full resolution, acceleration, silent fan, and so on. The running driver was nvidia-current (GT 216). After installing and booting, I found that the fan was running at full speed. The installed driver is nouveau. I tried installing nvidia-current, or any other proprietary driver, even installing the kernel headers and source and then the drivers (as suggested here), but all I'm getting with the proprietary drivers is, the irony, low resolution, a noisy fan, and no acceleration (and thus Unity and Compiz refusing to start). Does anybody know a way out?

    Read the article

  • Simulating a sine wave/oscillating pattern for enemies

    - by Sun
    I'm creating a simple top-down shooter. Right now I have an enemy which simply follows the player. I'd like to change things up and have the enemies move towards the player but in a wave-like motion. I have looked at some similar questions like this one, but they don't account for the Y position changing. How can I simulate a wave-like pattern for my enemies while they are homing in on their target? Edit: sample code. In my update method I have the following: Vector2 trackingPos = position - target; trackingPos.Normalize(); position -= trackingPos * elaspedTime * speed;
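
    One approach (a sketch building on the snippet above; totalTime, frequency, and amplitude are assumed fields for accumulated game time and wave tuning) is to keep the homing movement but add a sine-driven offset along the perpendicular of the direction to the target:

        // Direction from enemy to player, as in the original snippet.
        Vector2 toTarget = target - position;
        toTarget.Normalize();

        // The perpendicular of (x, y) is (-y, x); the enemy weaves along this axis.
        Vector2 perpendicular = new Vector2(-toTarget.Y, toTarget.X);

        // Advance along the homing direction, then sway sideways with a sine wave.
        float sway = (float)Math.Sin(totalTime * frequency) * amplitude;
        position += toTarget * speed * elapsedTime;     // homing component
        position += perpendicular * sway * elapsedTime; // oscillating component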

    Read the article

  • How can I disable as much "flair" as possible in Unity?

    - by UlfR
    I'm running Ubuntu 11.10 from a fresh install on a somewhat aged computer, so I have some speed problems. But since I do want to use the new, cool way of working that Unity introduces, I don't want to fiddle with getting another desktop environment. My idea is to disable as much flair as possible, but when I launch CompizConfig Settings Manager there are tonnes of options that I don't understand or don't have the time to read up on. Which of these can safely be disabled? Are there other places where performance could be gained by disabling things? In short, how can I make the whole thing a little more snappy?

    Read the article

  • for vs. foreach vs. LINQ

    - by beccoblu
    When I write code in Visual Studio, ReSharper (God bless it!) often suggests that I change my old-school for loop into the more compact foreach form. And often, when I accept this change, ReSharper goes a step further and suggests that I change it again, into a shiny LINQ form. So I wonder: are there real advantages to these improvements? In pretty simple code execution I cannot see any speed boost (obviously), but I can see the code becoming less and less readable... So I wonder: is it worth it?
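
    For illustration, here are the three forms side by side in a toy example (summing the even numbers in an array); the LINQ version trades the explicit loop for a declarative one-liner:

        int[] numbers = { 1, 2, 3, 4, 5, 6 };

        // Old-school for loop: explicit index management.
        int sum1 = 0;
        for (int i = 0; i < numbers.Length; i++)
        {
            if (numbers[i] % 2 == 0)
                sum1 += numbers[i];
        }

        // foreach: no index bookkeeping, but still an explicit loop body.
        int sum2 = 0;
        foreach (int n in numbers)
        {
            if (n % 2 == 0)
                sum2 += n;
        }

        // LINQ (requires using System.Linq): declarative and compact.
        int sum3 = numbers.Where(n => n % 2 == 0).Sum();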

    Read the article

  • How do I reserve bandwidth?

    - by Kaktarua
    In Windows there is a registry entry called reserve bandwidth. With it, the system reserved some bandwidth on my Internet connection, so that while I was downloading, all the other connected applications could still run with a minimum of Internet access. For example, the reserved bandwidth helped keep other applications (all kinds of messengers) online while I was downloading. In Ubuntu my connection speed is good enough, but the problem arises when I am downloading a file: all my messengers get offline status. Can anyone tell me how I can fix this problem, or why it is happening?

    Read the article

  • Getting wifi working on 14.04

    - by user286114
    I have installed Ubuntu 14.04 on my laptop. When I plug in the ethernet cable the internet works fine, but I can't see any wireless networks in the network manager. The wifi switch on my laptop is definitely on! It's a Dell XPS M1330. I'm not sure what the network card is - how can I find out? nm-tool gives me this: NetworkManager Tool State: connected (global) - Device: eth0 [Wired connection 1] Type: Wired, Driver: tg3, State: connected, Default: yes, HW Address: 00:23:AE:28:FE:A2, Capabilities: Carrier Detect: yes, Speed: 100 Mb/s, Wired Properties: Carrier: on, IPv4 Settings: Address: 192.168.1.26, Prefix: 24 (255.255.255.0), Gateway: 192.168.1.254, DNS: 192.168.1.254

    Read the article

  • Tuning Distributed Applications to Access Big Data

    Distributed applications are just that: distributed across one or more hardware platforms across the enterprise. The database administrator (DBA) has the unenviable task of monitoring these environments and configuring and tuning the database server to meet multiple needs. As multiple distributed applications now require access to a very large data store, what tuning options are available to help?

    Read the article

  • What is the easiest and fastest way to display an SDL_Surface in a window with SDL2?

    - by Semmu
    I would like to have an SDL_Surface representing the contents of the window, just like in the old days with SDL 1.2. What is the best and fastest way to do that in SDL2? What I have found is that I need an SDL_Window, an SDL_Renderer for that window, an SDL_Texture to render, and an SDL_Surface to create the texture from. This seems a bit much to me, since I just want to display a single image on the screen - not to mention the impact on performance. On my machine (a Lenovo Y510p laptop) this whole procedure takes 9 ms, without any memory allocation, using only pre-allocated variables and a totally black SDL_Surface. Is there a way I could speed things up?

    Read the article

  • SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28

    - by pinaldave
    This is a very interesting wait type, and it is quite often seen as one of the top wait types. Let us discuss it today. From Book On-Line: Occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed. SOS_SCHEDULER_YIELD Explanation: SQL Server has multiple threads, and the basic working methodology for SQL Server is that it does not let any “runnable” thread starve. Now let us assume the SQL Server OS is very busy running threads on all the schedulers. There are always new threads coming up which are ready to run (in other words, runnable). Thread management for SQL Server is decided by SQL Server and not by the operating system. SQL Server runs in non-preemptive mode most of the time, meaning the threads are co-operative and can let other threads run from time to time by yielding. When any thread yields itself to another thread, it creates this wait. If there are many such threads, it clearly indicates that the CPU is under pressure. You can run the following DMV to see how many runnable tasks there are on your system. SELECT scheduler_id, current_tasks_count, runnable_tasks_count, work_queue_count, pending_disk_io_count FROM sys.dm_os_schedulers WHERE scheduler_id < 255 GO If you notice a two-digit number in runnable_tasks_count continuously for a long time (not just once in a while), you will know that there is CPU pressure. A two-digit number there is usually considered a bad sign; you can read the description of the above DMV over here. Additionally, there are several other counters (%Processor Time and other processor-related counters) which you can refer to in order to validate CPU pressure along with the method explained above. Reducing the SOS_SCHEDULER_YIELD wait: This is the trickiest part of this procedure. As discussed, this particular wait type relates to CPU pressure. Adding more CPU is, in simple terms, the solution; however, it is not easy to implement. There are other things that you can consider when this wait type is very high. Here is a query that finds the most CPU-expensive queries in the cache. Note: A query that used lots of resources but is not cached will not be caught here. SELECT SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1, ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.TEXT) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1), qs.execution_count, qs.total_logical_reads, qs.last_logical_reads, qs.total_logical_writes, qs.last_logical_writes, qs.total_worker_time, qs.last_worker_time, qs.total_elapsed_time/1000000 total_elapsed_time_in_S, qs.last_elapsed_time/1000000 last_elapsed_time_in_S, qs.last_execution_time, qp.query_plan FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp ORDER BY qs.total_worker_time DESC -- CPU time You can find the most expensive queries that are utilizing lots of CPU (from the cache) and tune them accordingly. Moreover, you can find the longest-running queries and attempt to tune them if there is any processor-offending code. Additionally, pay attention to total_worker_time, because if that is also consistently high, the CPU is under too much pressure. You can also check the perfmon counters for compilations, as compilations tend to use a good amount of CPU.
    An index rebuild is also a CPU-intensive process, but we should not consider it the main cause here, because rebuilds are genuinely needed on high-transaction OLTP systems to reduce fragmentation. Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All of the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what Hadoop is. In this article we will take a quick look at one of the most important buzz words going around Big Data – MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally proprietary Google technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() procedure and a Reduce() procedure. The Map() procedure performs filtering and sorting operations on the data, whereas the Reduce() procedure performs a summary operation on it. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries implementing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow. Advantages of MapReduce Procedures: The MapReduce Framework usually consists of distributed servers and runs various tasks in parallel. Various components manage the communication between the nodes of the data and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce Framework takes care of the details of partitioning the data and executing the processes on the distributed servers at run time. During this process, if there is any disaster, the framework provides high availability and the other available nodes take over the responsibility of the failed node. As you can clearly see, the MapReduce Framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce Framework processes many petabytes of data on thousands of processing machines. How Does the MapReduce Framework Work? A typical MapReduce deployment contains petabytes of data and thousands of nodes. Here is a basic explanation of the MapReduce procedures, which use this massive array of commodity servers. Map() procedure: There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs, or sub-problems. These sub-problems are distributed to worker nodes. A worker node then processes them and does the necessary analysis. Once a worker node completes the process on its sub-problem, it returns the result to the master node. Reduce() procedure: All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects the answers and aggregates them into the answer to the original big problem it was assigned. The MapReduce Framework runs the Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run in parallel, and once each worker node has completed its task it can send the result back to the master node to be compiled into a single answer. This particular procedure can be very effective when it is implemented on a very large amount of data (Big Data).
    The MapReduce Framework has five different steps: preparing the Map() input, executing the user-provided Map() code, shuffling the Map output to the Reduce processors, executing the user-provided Reduce() code, and producing the final output. Here is the dataflow of the MapReduce Framework: input reader, Map function, partition function, compare function, Reduce function, output writer. In a future blog post of this series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement: MapReduce is equivalent to the SELECT and GROUP BY of a relational database, applied to a very large database. Tomorrow: In tomorrow’s blog post we will discuss the buzz word HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
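
    As a toy illustration of the model (a local sketch only; a real framework distributes these phases across many nodes), here is the classic word count written as a map phase and a reduce phase in C# with LINQ:

        using System;
        using System.Linq;

        class WordCount
        {
            static void Main()
            {
                string[] documents =
                {
                    "big data is big",
                    "map reduce processes big data"
                };

                // Map phase: emit a (word, 1) pair for every word in every document.
                var pairs = documents.SelectMany(doc => doc.Split(' '))
                                     .Select(word => new { Key = word, Value = 1 });

                // Shuffle + Reduce phase: group the pairs by key and sum the values.
                var counts = pairs.GroupBy(p => p.Key)
                                  .Select(g => new { Word = g.Key, Count = g.Sum(p => p.Value) });

                foreach (var c in counts)
                    Console.WriteLine("{0}: {1}", c.Word, c.Count);
            }
        }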

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs. Challenges A word from Omnilogic “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment Demonstrate power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating risk of unplanned outages Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners Solutions Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, support seminars and to showcase live demonstrations for Oracle Partners Proved how Oracle Exadata’s pre-engineered systems, that come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve the full performance potential immediately after go live Increased processing 
    speeds 10-fold, with zero data loss, for a telecommunications provider’s client-facing customer relationship management solution Achieved performance improvements of between 6 and 100 times faster for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience Demonstrated that Oracle Exadata’s built-in analytics, data mining and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allows 10 times more data to be stored than on competitor platforms Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • Download YouTube Videos the Easy Way

    - by Trevor Bekolay
    You can’t be online all the time, and despite the majority of YouTube videos being nut-shots and Lady Gaga parodies, there is a lot of great content that you might want to download and watch offline. There are some programs and browser extensions to do this, but we’ve found that the easiest and quickest method is a bookmarklet that was originally posted on the Google Operating System blog (it’s since been removed). It will let you download standard-quality and high-definition movies as MP4 files. Also, because it’s a bookmarklet, it will work in any modern web browser, on any operating system! Installing the bookmarklet is easy – just drag and drop the Get YouTube video link below to the bookmarks bar of your browser of choice. If you’ve hidden the bookmarks bar, in most browsers you can right-click on the link and save it to your bookmarks. With the bookmarklet available in your browser, go to the YouTube video that you’d like to download. Click on the Get YouTube video link in your bookmarks bar, or in the bookmarks menu, wherever you saved it earlier. You will notice some new links appear below the description of the video. If you download the standard-definition file, it will save as “video.mp4” by default. However, if you download the high-definition file, it will save with the same name as the title of the video. There are many methods of downloading YouTube videos… but we think this is the easiest and quickest method. You don’t have to install anything or use up resources, but you can still get a link to download an MP4 with one click. Do you use a different method to download YouTube videos? Let us know about it in the comments!

    Read the article

  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience. Oracle Fusion Procurement Design Goals: In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools that people need at their fingertips. In Oracle Fusion Procurement, the user experience is designed to provide the user with information that will drive navigation rather than requiring the user to find information. One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks that a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount—all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page. The result is that users can complete their tasks in less time. The focus is on completing the work, not finding the work. Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience. Will Oracle Fusion Procurement Increase Productivity? To answer this question, Oracle sought to model how two experts working head to head—one in an existing enterprise application and another in Oracle Fusion Procurement—would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method. This method is based on years of research at universities such as Carnegie Mellon and research labs like Xerox Palo Alto Research Center. The KLM method breaks a task into a sequence of operations and uses standardized models to evaluate all of the physical and cognitive actions that a person must take to complete it: what a user would have to click, how long each click would take (not only the physical action of the click or the typing of a letter, but also how long someone would have to think about the page when taking the action), and the user interface changes that result from the click. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time is calculated. Task times from the model enable researchers to predict end-user productivity. For the study, we focused on modeling procurement business process task flows that were considered business- or mission-critical: high-frequency tasks and high-value tasks. The designs evaluated encompassed tasks that are currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for each task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete each task, and applied standard time estimates to the operators in each task to estimate overall task completion times.
    The Results: The KLM method predicted that the Oracle Fusion Procurement designs would result in productivity gains in each task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These performance gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens to complete the task. Modeling user productivity has resulted not only in advances in Oracle Fusion applications, but also in advances in other areas. We leveraged lessons learned from the KLM studies to improve products like Oracle E-Business Suite (EBS). New user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-type search using auto-suggest, embedded analytics, and an in-context list of values tool, help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.
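
    To make the method concrete, here is a toy C# sketch of a KLM-style estimate. The operator times are the commonly cited KLM values; the operator counts below are invented for illustration and are not taken from Oracle's study:

        using System;

        class KlmEstimate
        {
            static void Main()
            {
                // Commonly cited KLM operator times, in seconds.
                const double K = 0.2;   // keystroke (skilled typist)
                const double P = 1.1;   // point with the mouse
                const double H = 0.4;   // home hands between keyboard and mouse
                const double M = 1.35;  // mental preparation

                // Invented operator counts for a hypothetical requisition flow.
                double oldDesign = 4 * M + 8 * P + 2 * H + 30 * K; // several screens
                double newDesign = 3 * M + 6 * P + 2 * H + 24 * K; // fewer screens

                Console.WriteLine("Old: {0:F1} s, new: {1:F1} s, gain: {2:P0}",
                    oldDesign, newDesign, 1 - newDesign / oldDesign);
            }
        }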

    Read the article

  • Bancassurers Seek IT Solutions to Support Distribution Model

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, attended the third annual Bancassurance Forum in Vienna last month. He reports that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models. Vienna is at the crossroads between mature Western European markets, where bancassurance is now an established best practice, and more recently tapped Eastern European markets that offer the greatest growth potential. Attendance at the Bancassurance Forum was good, with 87 bancassurance attendees, most in very senior positions in the industry. The conference provided the chance for a lively discussion among bancassurers looking to keep abreast of the latest trends in one of Europe's most successful distribution models for insurance. Even under normal business conditions, there is a great demand for best practice sharing within the industry as there is no standard formula for success.  Each company has to chart its own course and choose the strategies for sales, products development and the structure of ownership that make sense for their business, and as soon as they get it right bancassurers need to adapt the mix to keep up with ever changing regulations, completion and economic conditions.  To optimize the overall relationship between banking and insurance for mutual benefit, a balance needs to be struck between potentially conflicting interests. The banking side of the house is looking for greater wallet share from its customers and the ability to increase profitability by bundling insurance products with higher margins - especially in light of the recent economic crisis, where margins for traditional banking products are low and completion high. The insurance side of the house seeks access to new customers through a complementary distribution channel that is efficient and cost effective. To make the relationship work, it is important that both sides of the same house forge strategic and long term relationships - irrespective of whether the underlying business model is supported by a distribution agreement, cross-ownership or other forms of capital structure. However, this third annual conference was not held under normal business conditions. The conference took place in challenging, yet interesting times. ING's forced spinoff of its insurance operations under pressure by the EU Commission and the troubling losses suffered by Allianz as a result of the Dresdner bank sale were fresh in everyone's mind. One year after markets crashed, there is now enough hindsight to better understand the implications for bancassurance and best practices that are emerging to deal with them. The loan-driven business that has been crucial to bancassurance up till now evaporated during the crisis, leaving bancassurers grappling with how to change their overall strategy from a loan-driven to a more diversified model.  Attendees came to the conference to learn what strategies were working - not only to cope with the market shift, but to take advantage of it as markets pick up. Over the course of 14 customer case studies and numerous analyst presentations, topical issues ranging from getting the business model right to the impact on capital structuring of Solvency II were debated openly. 
Many speakers alluded to the need to specifically design insurance products with the banking distribution channel in mind, which brings with it specific requirements such as a high degree of standardization to achieve efficiency and reduce training costs. Moreover, products must be engineered to suit end consumers who consider banks a one-stop shop. The importance of IT to the successful implementation of bancassurance strategies was a theme that surfaced regularly throughout the conference.  The cross-selling opportunity - that will ultimately determine the success or failure of any bancassurance model - can only be fully realized through a flexible IT architecture that enables banking and insurance processes to be integrated and presented to front-line staff through a common interface. However, the reality is that most bancassurers have legacy IT systems, which constrain the businesses' ability to implement new strategies to maintaining competitiveness in turbulent times. My colleague Glenn Lottering, who chaired the conference, believes that the primary opportunities for bancassurers to extract value from their IT infrastructure investments lie in distribution management, risk management with the advent of Solvency II, and achieving operational excellence. "Oracle is ideally suited to meet the needs of bancassurance," Glenn noted, "supplying market-leading software for both banking and insurance. Oracle provides adaptive systems that let customers easily integrate hybrid business processes from both worlds while leveraging existing IT infrastructure." Overall, the consensus at the conference was that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.    

    Read the article
