Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • Oracle at CeBIT 2011 in Hannover

    - by franziska.schneider(at)oracle.com
    Cloud Computing as an Organisational Strategy in Heterogeneous Environments, 02.03.2011, 15:40 - 16:00, Halle 4, Stand A 58. Organiser: BITKOM. Event series: Cloud Computing World. Speaker: Helene Lengler, Vice President, ORACLE Deutschland B.V. & Co. KG. You can also meet many Oracle partners at CeBIT. Just write to us with the topic you are interested in and we will arrange an appointment.

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly “-as-a-Service” models of computing - have a different cost model. This sounds obvious on the surface, but it’s often forgotten during the design and coding phases of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better. In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie the cost of a series of clicks back to what a user will pay to perform them, you can set a profit margin that is easy to track. Here’s a case in point: assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to make full use of it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit. The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.
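
    A tiny illustration of the point (not from the original post; the rates and workloads below are invented placeholders, not real Azure pricing): once every unit of compute, storage, transactions and egress has a price, a design change can be expressed directly as a change in the monthly bill. A minimal Python sketch:

        # Hypothetical pay-as-you-go rates -- placeholders, not real Azure prices.
        RATE_INSTANCE_HOUR = 0.48   # one "large" compute instance, per hour
        RATE_GB_STORED     = 0.15   # per GB-month
        RATE_10K_REQUESTS  = 0.01   # per 10,000 storage transactions
        RATE_GB_EGRESS     = 0.12   # per GB sent back to clients

        def monthly_cost(instance_hours, gb_stored, requests, gb_egress):
            """Rough monthly bill for one candidate design."""
            return (instance_hours * RATE_INSTANCE_HOUR
                    + gb_stored * RATE_GB_STORED
                    + requests / 10_000 * RATE_10K_REQUESTS
                    + gb_egress * RATE_GB_EGRESS)

        # One large instance running flat out, chatty data access.
        before = monthly_cost(instance_hours=730, gb_stored=50,
                              requests=40_000_000, gb_egress=200)
        # Fewer instance hours plus a cache: far fewer hits on the data store.
        after = monthly_cost(instance_hours=365, gb_stored=55,
                             requests=8_000_000, gb_egress=120)
        print(f"before: ${before:,.2f}/month   after: ${after:,.2f}/month")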

    Read the article

  • How can I set the fan speed to 100% on a laptop?

    - by Codemonkey
    I recently purchased an HP Pavilion dv7-4031. When it's cool, it works smoothly and efficiently. However, when CPU and GPU temperatures reach 60°C and above, the PC starts freezing up and stuttering. I can hear the fan speeds steadily increase all the way up to 70-80°C. This is what pisses me off: I want the fans to run at 100% all the time, perhaps preventing the high temperatures in the first place. The way it is now, the fan speeds only increase enough to keep internal temperatures at or above 60°C. I've searched all over for any sort of speed control and found nothing. Any help appreciated. I have tried SpeedFan; under "Fans" there is nothing listed, which I took as a bad sign. The BIOS is pathetic and only has 4 or 5 changeable settings, including "Quickstart" and "Boot order".

    Read the article

  • Why is the network speed on my Mac Pro (early 2009) so slow?

    - by Rafael
    I have a really weird networking issue on my Mac Pro (Early 2009): I can’t get network speeds higher than about 2 Mbit/s, whether over AirPort or one of the Ethernet ports. An iMac and a Mac mini on the same network, with almost the same configuration, get about 25-30 Mbit/s. I’ve read a couple of things about this on the official Apple forums, but there is no helpful information there. Has anyone else had Mac Pro network speed issues, and does anyone know how to solve them?

    Read the article

  • Why can't Windows 7 burn a DVD-R at 4x or 8x? (It seems to have to use the max speed)

    - by Jian Lin
    It's great that Windows 7 can burn an .iso to a DVD-R, but there seems to be no way to change the speed to 8x or 4x (so it always burns at "max", which is 16x). Some of my DVD drives cannot reliably read data if the disc was burned at 16x, which is why I usually limit it to 8x. Also, some articles say that a disc burned at 4x can last longer, so I am sometimes tempted to burn at 4x. Is there any method or hack that can make this happen?

    Read the article

  • The speed of copying a file from a PC to a USB flash drive started at 30 MB/s and decreased to 5.8 MB/s

    - by Jian Lin
    If I copy an 8 GB file from the PC to a USB flash drive, the speed starts at around 30 MB/s (maybe 28 MB/s), then gradually drops after a minute to 15 MB/s and finally settles at 5.8 MB/s. I understand that for a hard drive, the RAM cache and the drive's internal cache can make copying a file from the PC appear fast at first. But a USB flash drive should have no internal cache of its own. Is there a RAM cache involved here, and is that why the initial copying seems so fast?
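
    Not part of the question, but a small way to watch the OS write cache at work: time a large write to the drive, then force the data to the device with fsync and compare. The path and sizes below are placeholders; point TARGET at the flash drive's mount point. A minimal Python sketch:

        import os, time

        TARGET = "/media/usbdrive/cache_test.bin"   # placeholder path on the flash drive
        CHUNK = b"\0" * (4 * 1024 * 1024)           # 4 MiB per write
        TOTAL_CHUNKS = 256                          # about 1 GiB in total

        written = 0
        start = time.monotonic()
        with open(TARGET, "wb") as f:
            for i in range(TOTAL_CHUNKS):
                f.write(CHUNK)
                written += len(CHUNK)
                if i == 15:   # after the first 64 MiB: mostly the RAM write cache
                    rate = written / (time.monotonic() - start) / 1e6
                    print(f"early apparent speed: {rate:.1f} MB/s")
            f.flush()
            os.fsync(f.fileno())   # wait until the device itself has the data
        elapsed = time.monotonic() - start
        print(f"sustained device speed: {written / elapsed / 1e6:.1f} MB/s")
        os.remove(TARGET)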

    Read the article

  • Download speed starts OK but after a few seconds it drops to 10 kbps?

    - by peterg
    Since yesterday, any file I try to download from any host starts downloading fine (at 1 Mbps), but after a few seconds the speed starts to decrease, down to 10 kbps. I use a mobile modem (Huawei) and I connect to a WCDMA network. What could be happening? I use Kaspersky, and I checked its Network Monitor and don't see any weird process using bandwidth. What other tests can I run? Please recommend an app to monitor traffic so I can see if I have some malware, or see where my bandwidth goes.

    Read the article

  • Best solution for High Availability and SSRS on SQL Server 2008 R2?

    - by Chandra
    I have 2 physical servers with SQL Server 2008 R2: SQL Server 1 (active) and SQL Server 2 (passive). The web application is developed using the .NET 4.0 Framework. I want to know the best solution to get high availability and also have SSRS for reporting. Planned solution: mirroring for failover, and transactional replication for SSRS, since the mirrored database can only be used in failover scenarios. SSRS will be on the passive server, to reduce the load on the active server. Let me know if this solution is correct, and please also suggest alternative approaches.

    Read the article

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain with a lot of polygons. To render this terrain I thought of the following techniques: Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and similar features that simply won't work with height-maps. Using voxels: yes, but I think that would be too much, since I don't even want to support changing terrain. Splitting into multiple chunks and doing some sort of LOD on the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less. Precomputing the same mesh in lower-poly versions (like Mudbox does) and rendering one of these meshes depending on the distance: graphics memory is limited, and uploading only the chunks won't solve that problem since the traffic would be too high. IMO the last one sounds really good, but imagine the following process: upload and render the chunks depending on the current player position [no problem]. The player walks straight forward. Now we may have to swap one of the low-poly chunks for the high-poly one. So: remove the low-poly chunk and load the high-poly chunk [already too much traffic here, I think]. I am not very experienced in graphics programming and maybe the above process is totally fine, but somehow I think it is too much. And what about the disk space it would require? I think 3 levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that is important.)
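
    Not from the post, but a minimal sketch of the chunk bookkeeping described above: pick a detail level per chunk from the distance to the player, with a hysteresis band so a chunk does not flip between levels (and trigger new uploads) every frame. The distances, level count and upload callback are invented for illustration; Python is used for brevity:

        import math

        LOD_DISTANCES = [100.0, 300.0, 900.0]   # max distance at which each level is used
        HYSTERESIS = 0.15                        # 15% band so chunks don't flip every frame

        class Chunk:
            def __init__(self, center):
                self.center = center
                self.lod = len(LOD_DISTANCES) - 1          # start with the cheapest mesh

            def wanted_lod(self, player_pos):
                d = math.dist(self.center, player_pos)
                target = next((lvl for lvl, limit in enumerate(LOD_DISTANCES) if d < limit),
                              len(LOD_DISTANCES) - 1)
                if target == self.lod:
                    return self.lod
                boundary = LOD_DISTANCES[min(target, self.lod)]
                if target < self.lod and d < boundary * (1 - HYSTERESIS):
                    return target                           # clearly close: finer mesh
                if target > self.lod and d > boundary * (1 + HYSTERESIS):
                    return target                           # clearly far: coarser mesh
                return self.lod                             # inside the band: keep current level

            def update(self, player_pos, upload_mesh):
                new_lod = self.wanted_lod(player_pos)
                if new_lod != self.lod:                     # traffic happens only on a real switch
                    upload_mesh(self, new_lod)              # e.g. buffer the precomputed mesh
                    self.lod = new_lod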

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to other objects? Should they be created at a 'high level' and intercept the calls as soon as possible, or should they be created at a 'low level' so that as much of the real code as possible is still exercised? For example, I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB. class NoteMapper { function getNote($sqlQueryFactory, $noteID) { // Create an SQL query from $sqlQueryFactory // Run that SQL // if null // return null // else // return new Note($dataFromSQLQuery) } } I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to the SQL at all, e.g. class MockNoteMapper { function getNote($sqlQueryFactory, $noteID) { // $mockData = {'Test Note title', "Test note text"} // return new Note($mockData); } } Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which is better, or should both be used where appropriate?
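
    As an illustration only (not from the original question), here is a small Python sketch of the two styles using unittest.mock; the Note/NoteMapper names mirror the question, and the query-factory API is assumed:

        from unittest.mock import Mock

        class Note:
            def __init__(self, title, text):
                self.title, self.text = title, text

        class NoteMapper:
            def get_note(self, sql_query_factory, note_id):
                query = sql_query_factory.select("notes", note_id)   # assumed factory API
                row = query.fetch_one()
                return Note(*row) if row else None

        # High-level mock: replace the whole mapper, none of its real code runs.
        high_level = Mock(spec=NoteMapper)
        high_level.get_note.return_value = Note("Test Note title", "Test note text")

        # Low-level mock: fake only the query factory, so NoteMapper's logic still runs.
        fake_query = Mock()
        fake_query.fetch_one.return_value = ("Test Note title", "Test note text")
        fake_factory = Mock()
        fake_factory.select.return_value = fake_query

        note = NoteMapper().get_note(fake_factory, note_id=42)
        assert note.title == "Test Note title"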

    Read the article

  • HTTP 2.0: Microsoft proposes "HTTP Speed+Mobility" to speed up the Web

    Microsoft wants to make the Web faster and has submitted proposals for the HTTP 2.0 protocol to the IETF (Internet Engineering Task Force), the body in charge of Internet standardisation. After Google with its SPDY project, which aims to double the speed of the Web by making adjustments to the HTTP protocol through an additional layer, it is now the Redmond firm's turn to show its interest in the future of the Web. In a recently published blog post, the company presents its "HTTP Speed+Mobility" proposal, which will be submitted to the HTTPbis working group.

    Read the article

  • 6 Ways to Speed Up Your Ubuntu PC

    - by Chris Hoffman
    Ubuntu is pretty snappy out-of-the-box, but there are some ways to take better advantage of your system’s memory and speed up the boot process. Some of these tips can really speed things up, especially on older hardware. In particular, selecting a lightweight desktop environment and lighter applications can give an older system a new lease on life. That old computer that struggles with Ubuntu’s Unity desktop can provide decent performance for years to come.

    Read the article

  • C or assembly code to find the current CPU core speed

    - by honestann
    How can my application efficiently determine the following information periodically while it executes: 1) the current speed of each of the 8 CPU cores, and 2) which core the code is currently executing on? My application is C and assembly language, so any solution in either C or assembly is fine. This code needs to execute quickly, so creating, reading and processing a file generated by "cat /proc/cpuinfo" is much too slow. The cores slow down and speed up automatically, probably to keep the CPU temperature under control, so a one-time measurement is not sufficient for my purposes. My application already reads and subtracts the CPU cycle counter in assembly language to determine the number of clock cycles, but my program cannot compute elapsed time in nanoseconds unless it knows the current clock frequency of the CPU cores (and which core the code is executing on). Thanks!
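
    Not an answer from the thread, but one common Linux approach as a hedged sketch: the kernel exposes each core's current frequency through cpufreq in sysfs, and glibc's sched_getcpu() reports the core the calling thread is on; both reads translate directly to C. Whether scaling_cur_freq is present and how fresh its value is depends on the cpufreq driver. Python is used here for brevity:

        import ctypes

        libc = ctypes.CDLL("libc.so.6", use_errno=True)

        def current_core() -> int:
            """Core the calling thread is running on (glibc sched_getcpu)."""
            cpu = libc.sched_getcpu()
            if cpu < 0:
                raise OSError(ctypes.get_errno(), "sched_getcpu failed")
            return cpu

        def core_khz(core: int) -> int:
            """Current frequency of one core in kHz, as reported by cpufreq."""
            path = f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_cur_freq"
            with open(path) as f:
                return int(f.read())

        core = current_core()
        print(f"running on core {core} at ~{core_khz(core) / 1e6:.2f} GHz")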

    Read the article

  • Best practices when loading images for improving page loading speed

    - by Naoise Golden
    I am working on optimizing a page's loading speed. Here are some analytics: notice how the images, although accounting for only 65% of the total size (1.1 MB), are by far the slowest-loading assets, at 96% of the time. I'd like to know the recommended practices for optimizing loading speed, taking only images into account. Some of the techniques we are already applying: image compression; images hosted on a cookieless domain and a CDN; spriting everything that can be sprited; HTTP headers: keep-alive and Expires set to one year. Disclaimer: I have gone through the available documentation; I think that by focusing on image loading optimization I am not creating a duplicate or a subjective question.
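
    As a side note (not from the question), a quick way to confirm that the caching headers are actually reaching the browser is to inspect a response from the CDN; the URL below is a placeholder for one of the image assets:

        import urllib.request

        # Placeholder URL: substitute one of the CDN-hosted image assets.
        req = urllib.request.Request("https://cdn.example.com/img/sprite.png", method="HEAD")
        with urllib.request.urlopen(req) as resp:
            for name in ("Expires", "Cache-Control", "Last-Modified", "ETag", "Connection"):
                print(name, ":", resp.headers.get(name))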

    Read the article

  • System Monitor network speed not showing for LAN but works for my Wi-Fi

    - by Pavak
    I'm on Ubuntu 13.10. I generally use Wi-Fi to connect to the internet, but yesterday my Wi-Fi router developed a problem and is now away for warranty service, so temporarily I'm using LAN. System Monitor displayed the network speed correctly when I was on Wi-Fi, but now it doesn't show any network speed at all. I checked the preferences option but couldn't find a way to fix it. I also checked "ksysguard" (KDE's system monitor) and conky; neither of them works either. How can I solve this? I'm attaching a screenshot to clarify the problem.

    Read the article

  • Wired internet speed is slow in Ubuntu 12.04

    - by RameshKatkam
    I dual boot Windows 7 and Ubuntu 12.04 on my Dell N4110 laptop; I recently installed Ubuntu 12.04 on my existing Windows 7 hard drive. I am experiencing very slow internet speed on Ubuntu (wired connection), but on Windows the internet is really fast. I disabled IPv6 by editing /etc/sysctl.conf, to no avail. Then I ran the following command: sudo ethtool -s eth0 speed 100 duplex full autoneg off. No benefit. My network card is a "Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)".

    Read the article

  • Windows 2003: is PHP limiting my download speed?

    - by JohnScout
    Hello, I have a Windows 2003 server on a 100 Mbps port. I have tried using PHP scripts such as PHP indexer, Zina (pancake.org) and others; the PHP scripts are used to serve downloads such as images and music. I personally have a 20 Mbps internet connection. When I use a PHP script (the download passes through PHP headers), it downloads at a constant 30-40 KB/s. I have tried different web servers: Apache 1.3, Apache 2.2, Abyss Web Server and lighttpd for Windows. The speed when going through PHP is the same constant 30-40 KB/s, but with a direct link straight from Apache, the speed is 1 MB/s. Is there any setting in the Windows 2003 registry or in PHP that I should change to make downloads faster when they go through PHP?

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% of the time with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems, but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? I look at apple.com, for instance, as an example of a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post.)
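
    Purely as an illustration of the two rules being discussed (the uptimes and the 75/25 split below are hypothetical numbers, not the company's data):

        # Hypothetical monthly figures, for illustration only.
        cdn_availability = 0.9995      # fraction of the month the CDN served users correctly
        origin_availability = 0.9950   # fraction of the month the origin served users correctly
        cdn_share, origin_share = 0.75, 0.25

        worst_case = min(cdn_availability, origin_availability)
        traffic_weighted = cdn_share * cdn_availability + origin_share * origin_availability

        print(f"worst-case rule:  {worst_case:.4%}")
        print(f"traffic-weighted: {traffic_weighted:.4%}")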

    Read the article

  • Testing file uploading and downloading speed using FTP

    - by Toman
    Hi all, I am working on a desktop application in Java. In my application I have to perform a speed test that shows the file upload and download speed. For the upload test I upload a small test file to an FTP server and calculate the upload speed from the time taken; similarly, I download a test file from the server and calculate the download speed. But the result I am getting doesn't match the actual FTP upload and download speed. It seems that establishing the connection to the FTP server adds to the measured time, so the speed I calculate is lower. Could you suggest a link or a way to get closer to the real upload and download speeds? Thanks for your valuable suggestions.
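
    Not from the post, but the usual fix sketched out: open the connection and log in first, then start the clock only around the transfer itself. The same split applies to a Java FTP client; shown here with Python's ftplib, and the host, credentials and file name are placeholders:

        import io, time
        from ftplib import FTP

        PAYLOAD = b"\0" * (512 * 1024)        # 512 KiB test payload

        ftp = FTP("ftp.example.com")          # connection and login are NOT timed
        ftp.login("user", "password")

        start = time.monotonic()              # time only the upload
        ftp.storbinary("STOR speedtest.bin", io.BytesIO(PAYLOAD))
        elapsed = time.monotonic() - start
        print(f"upload: {len(PAYLOAD) / elapsed / 1024:.1f} KiB/s")

        received = bytearray()
        start = time.monotonic()              # time only the download
        ftp.retrbinary("RETR speedtest.bin", received.extend)
        elapsed = time.monotonic() - start
        print(f"download: {len(received) / elapsed / 1024:.1f} KiB/s")
        ftp.quit()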

    Read the article

  • Uploading to my local server is slower than downloading from the Internet

    - by Olivier Lalonde
    I have a home Ubuntu server that I use for storage. I have mounted an SFTP share on my laptop to access my server, but the upload speed I get is very slow (~400 kb/s) compared to the speeds I usually get when downloading through BitTorrent (~800 kb/s). It's kind of weird... I should get higher speeds on a LAN than over the Internet. How can I speed up uploads to my server, and how can I troubleshoot where the bottleneck is?
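
    One way to narrow down the bottleneck (not from the post): run a raw TCP throughput test between the laptop and the server. If the raw numbers are high, the limit is in the SFTP stack (encryption, disk) rather than the LAN itself. A minimal Python sketch; the port and transfer size are arbitrary:

        # On the server:  python3 net_test.py server
        # On the laptop:  python3 net_test.py client <server-ip>
        import socket, sys, time

        PORT, CHUNK, TOTAL = 5001, 64 * 1024, 64 * 1024 * 1024   # push 64 MiB

        if sys.argv[1] == "server":
            with socket.create_server(("", PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    while conn.recv(CHUNK):      # drain everything the client sends
                        pass
        else:
            data = b"\0" * CHUNK
            with socket.create_connection((sys.argv[2], PORT)) as s:
                start = time.monotonic()
                sent = 0
                while sent < TOTAL:
                    s.sendall(data)
                    sent += len(data)
                elapsed = time.monotonic() - start
            print(f"raw TCP throughput: {sent / elapsed / 1e6:.1f} MB/s")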

    Read the article

  • last-modified/etags - to include or not?

    - by Kae Verens
    Google's PageSpeed plugin suggests that a website should include Last-Modified and ETag headers: Specify a cache validator - "Resources that do not specify a cache validator cannot be refreshed efficiently. Specify a Last-Modified or ETag header to enable cache validation." However, AskApache suggests that by not including them at all, we speed up websites by eliminating If-Modified-Since and If-None-Match requests: http://www.askapache.com/htaccess/apache-speed-last-modified.html These are in direct opposition - which should be implemented? I'm leaning towards the AskApache suggestion, because when I want a file cached, I don't want it refreshed.
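
    For what it's worth (not from the question), the trade-off can be seen on the wire: with a validator the client still makes a round-trip and gets 304 Not Modified back, while a far-future Expires/Cache-Control header lets it skip that request entirely until the cached copy goes stale. A small Python sketch with a placeholder URL:

        import urllib.request
        from urllib.error import HTTPError

        URL = "https://www.example.com/style.css"   # placeholder asset URL

        with urllib.request.urlopen(URL) as resp:
            etag = resp.headers.get("ETag")
            last_mod = resp.headers.get("Last-Modified")

        # The revalidation round-trip that a fresh Expires header would avoid.
        req = urllib.request.Request(URL)
        if etag:
            req.add_header("If-None-Match", etag)
        if last_mod:
            req.add_header("If-Modified-Since", last_mod)

        try:
            with urllib.request.urlopen(req) as resp:
                print("full response again:", resp.status)
        except HTTPError as err:
            print("server answered:", err.code)     # 304 Not Modified lands here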

    Read the article
