Search Results

Search found 30217 results on 1209 pages for 'website performance'.

Page 303/1209 | < Previous Page | 299 300 301 302 303 304 305 306 307 308 309 310  | Next Page >

  • Manic Monday - More OpenWorld Solaris Sessions: Developers, Cloud, Customer Insights, Hardware Optimization

    - by Larry Wake
    We're overflowing with Monday sessions; literally more than any one person can take in. Learn more about what's new in Oracle Solaris Studio, hear about the latest x86 and SPARC hardware optimizations, get some insights on cloud deployment strategies, and find out from your peers what they're doing with Oracle Solaris. If you're an OpenWorld attendee, go to Schedule Builder to guarantee your space in any session or lab. See yesterday's blog post and the "Focus on Oracle Solaris" guide for even more sessions.

    Monday, October 1st:

    10:45 AM - Maximizing Your SPARC T4 Oracle Solaris Application Performance (CON6382, Marriott Marquis - Golden Gate C3)
    Hear how customers and commercial software partners have reached peak performance on SPARC T4 servers and engineered systems with Oracle Solaris Studio and its latest tools for analyzing, reporting, and improving runtime performance: autoparallelizing, high-performance compilers; Performance Analyzer (used to find performance hotspots); Thread Analyzer (to expose data races and deadlocks); Code Analyzer (used to discover latent memory corruption issues).

    10:45 AM - Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (CON8787, Moscone South 302)
    Decisions, decisions--at the same time, we've got a session that covers why Oracle Solaris is the ideal OS for public or private clouds, IaaS or PaaS, with built-in features for elastic infrastructure, unrivaled security, superfast installation and deployment, nonstop availability, and crystal-clear observability. This session will include a customer study on how Oracle Solaris is used in the cloud today to implement the Oracle stack.

    12:15 PM - Customer Insight: Oracle Solaris on Oracle Exadata, Oracle Exalogic, and SPARC SuperCluster (CON8760, Moscone South 270)
    Hear from customers what benefits they have realized from using the Oracle stack on Oracle Exadata and Oracle's SPARC SuperCluster, and from using Oracle Solaris on those engineered systems, taking advantage of built-in lightweight OS virtualization (Zones), enterprise reliability and scale, and other key features.

    1:45 PM - Case Study: Mobile Tornado Uses Oracle Technology for Better RAS and TCO (CON4281, Moscone West 2005)
    Mobile Tornado develops and markets instant communication platforms, replacing traditional radio networks with cellular networks. Its critical concern is uptime. Find out how they've used Oracle Solaris, Netra SPARC T4, and Oracle Solaris Cluster, including Oracle Solaris ZFS and Zones, for their Oracle Database deployments to improve reliability and drive down cost.

    3:15 PM - Technical Panel: Developing High Performance Applications on Oracle Solaris (CON7196, Marriott Marquis - Golden Gate C2)
    Engineers from the Oracle Solaris, Oracle Database, and Oracle Tuxedo development teams, and Oracle ISV Engineering, discuss how they develop high-performance enterprise applications that take advantage of Oracle's SPARC and x86 servers, with Oracle Solaris Studio and new Oracle Solaris 11 features. Topics will include developer tools, parallel frameworks, best practices, and methodologies, as well as insights and case studies on parallelizing and optimizing application performance on Oracle Solaris. Bring your best questions!

    3:15 PM - x86 Power Management with Oracle Solaris: Current State, Opportunities, and Future (CON6271, Moscone West 2012)
    Another option for this time slot: learn how Intel Xeon and Oracle Solaris work together to reduce server power consumption. This presentation addresses some of the recent power management improvements in Oracle Solaris, opportunities to further improve energy efficiency, and some future directions for Oracle Solaris power management.

    Read the article

  • "Opportunity" to take over maintenance of a small internal website. What should I do?

    - by Dan
    I have been offered an "opportunity" to take over maintenance of a small internal website run by my group that provides information about schedules and photos of events the groups done. My manager sent me the link to the site and checked it out. The site looked clean and neat but loaded in ~5 seconds. I thought this was a little long considering the site really didn't contain a lot of content. This prompted me to take a look under the hood at the pages source code. To my horror it'd been totally hacked together using nested tables! I'm new so I really can't say no to this "opportunity" so what should I do with it? Every fiber of my being feels that the only correct thing to do is over hall the site using CSS, Div's, Span's and any other appropriate tags that a sane/good web developer would used to begin with instead of depending on the render incentive magic of tables. But I'd like to ask programmers with more experienced then me, who have been in this situation. What should I do? Is my only realistic option to leave the horror as is and only adjusting the content as requested? I'm really torn between good development and the corporate reality I'm part of. Is there some kind of middle ground where things can be made better even if they're not perfect? Thanks ahead of time.

    Read the article

  • What are the requirements to test a website using jQuery .get()? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get() for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux), or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie

    PS: Here's the JS/jQuery code for downloading the files to an array.

        g_lists = new Array();
        $(":checkbox").each(function(i){
            if ($(this).attr("name") != "0") {
                var path = "../" + $(this).attr("name") + ".txt";
                $("#bot").append("<br />" + path); // debug
                $.get(path, function(data){
                    g_lists[i] = data;
                    $("#bot").html(data);
                });
            } else {
                g_lists[i] = "";
            }
        });

    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure. I'm new to web development. Here's the directory tree of the site and some examples of what the code produces. Maybe it will help; can't hurt.

        .
        +-- include
        |   +-- jquery.js
        |   +-- load.js
        +-- index.xhtml
        +-- style.css
        +-- txt
            +-- Scripting_Tools
                +-- Editors.txt
                +-- Other.txt

    Examples of path:

        ../txt/Scripting_Tools/Editors.txt
        ../txt/Scripting_Tools/Other.txt

    Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: after asking for help on an IRC channel specific to jQuery, I was told I can use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then to run the site I navigated my browser to "localhost" and everything works.

    Read the article

  • Does the google crawler really guess URL patterns and index pages that were never linked against?

    - by Dominik
    I'm experiencing problems with indexed pages which were (probably) never linked to. Here's the setup:

    1. Data-Server: an application with a RESTful interface which provides the data
    2. Website A: provides the data of (1) at http://website-a.example.com/?id=RESOURCE_ID
    3. Website B: provides the data of (1) at http://website-b.example.com/?id=OTHER_RESOURCE_ID

    So all the non-private data is stored on (1), and the websites (2) and (3) can fetch and display this data, each as its own representation of the data with additional cross-linking between them. In fact, the URL /?id=1 of website-a points to the same resource as /?id=1 of website-b. However, the resource id:1 is useless at website-b. Unfortunately, the Google index for website-b now contains several links to resources belonging to website-a and vice versa. I "heard" that the Google crawler tries to determine the URL pattern (which makes sense for deciding which pages should go into the index and which should not) and furthermore guesses other URLs by trying different values (like "I know that id 1 exists, let's try 2, 3, 4, ..."). Is there any evidence that the Google crawler really behaves that way (which I doubt)? My guess is that the Google crawler submitted an HTML form and somehow got links to those unwanted resources. I found some similar questions posted about this, including "Google webmaster central: indexing and posting false pages" [link removed]; however, none of them give any evidence.

    Read the article

  • How can I optimize Apache to use 1GB of RAM on my website? [closed]

    - by Markon
    My VPS plan gives me 1 GB of RAM, burstable to 2 GB. Of course I cannot use 2 GB, or even 1 GB, all the time, so I'm planning to optimize the performance of my webserver. The average is about 8,000-10,000 hits per hour, which means about 2 connections per second. The maximum reached so far is about 60,000 hits per hour, which means about 16 connections per second. Unfortunately my current Apache configuration uses too much memory (when there are no connected clients - usually during the night - it uses about 1 GB), so I've tried to customize the Apache installation to fit my needs. I'm using Ubuntu, kernel 2.6.18, with apache2-mpm-worker, since I've read it requires less memory, and fcgid (+ PHP). This is my /etc/apache2/apache2.conf:

        Timeout 45
        KeepAlive on
        MaxKeepAliveRequests 100
        KeepAliveTimeout 10
        <IfModule mpm_worker_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            MaxClients 100
            MaxRequestsPerChild 0
        </IfModule>

    This is the output of ps aux for apache2:

        www-data  9547 0.0 0.3 423828 7268 ? Sl 20:09 0:00 /usr/sbin/apache2 -k start
        root     17714 0.0 0.1  76496 3712 ? Ss Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 17716 0.0 0.0  75560 2048 ? S  Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 17746 0.0 0.1  76228 2384 ? S  Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 20126 0.0 0.3 424852 7588 ? Sl 19:24 0:02 /usr/sbin/apache2 -k start
        www-data 24260 0.0 0.3 424852 7580 ? Sl 19:42 0:01 /usr/sbin/apache2 -k start

    while this is ps aux for php5:

        www-data  7461 2.9 2.2 142172 47048 ? S 19:39 1:39 /usr/lib/cgi-bin/php5
        www-data 23845 1.3 1.7 135744 35948 ? S 20:17 0:15 /usr/lib/cgi-bin/php5
        www-data 23900 2.0 1.7 136692 36760 ? S 20:17 0:22 /usr/lib/cgi-bin/php5
        www-data 27907 2.0 2.0 142272 43432 ? S 20:00 0:43 /usr/lib/cgi-bin/php5
        www-data 27909 2.5 1.9 138092 40036 ? S 20:00 0:53 /usr/lib/cgi-bin/php5
        www-data 27993 2.4 2.2 142336 47192 ? S 20:01 0:50 /usr/lib/cgi-bin/php5
        www-data 27999 1.8 1.4 135932 31100 ? S 20:01 0:38 /usr/lib/cgi-bin/php5
        www-data 28230 2.6 1.9 143436 39956 ? S 20:01 0:54 /usr/lib/cgi-bin/php5
        www-data 30708 3.1 2.2 142508 46528 ? S 19:44 1:38 /usr/lib/cgi-bin/php5

    As you can see, it uses a lot of memory. How can I reduce it to fit in just 1 GB of RAM?

    PS: I'm also thinking about switching to nginx, if Apache can't meet my needs...

    Read the article

  • Should I, and how do I, incorporate microdata into my asp.net website with 47 pages?

    - by Jason Weber
    I have an asp.net (VB) website with 47 pages. The problem is that it's in 10 different languages, although 98% just use English. I have 5 master pages. I've read Google Webmaster Tools, but I'm still confounded. I'm reading about how microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all of my 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my "about.aspx" page are:

        <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation.

    Am I doing this right, or how would I do this if I'm not? Should I use itemprop="url" or other rich snippets for every link in my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance in this regard to help improve my SEO and SERPs would be greatly appreciated!

    Read the article

  • WebCenter Content Web Search Performance: Do you really need that folder path info?

    - by Nicolas Montoya
    End-users want content at their fingertips, at the speed of thought if possible. When running search operations in the WebCenter Content web interface, every second, or fraction of a second, of improvement matters. When doing some trace analysis with systemdatabase tracing on a customer environment, we came across some SQL queries that were being triggered unnecessarily! These were related to determining the folder path for every entry in the search result set. However, this folder path was not even being used as part of the information displayed in the user interface.

    Why was the folder path information being collected when it was not even displayed in the UI? We found that the configuration parameter 'FolderPathInSearchResults' was set to 'true' under Administration > Admin Server > General Configuration > Additional Configuration Variables, as shown below.

    When executing a quick search by keyword, we were getting 100 out of 2280 entries in the first page of the result set. When the 'FolderPathInSearchResults' configuration parameter is set to 'true', the following queries appear in the systemdatabase tracing.

    100 executions of a query on the FolderFiles table, one for each of the documents displayed in the first page:

        >systemdatabase/6  12.13 11:17:48.188  IdcServer-199  1.45 ms. SELECT * FROM FolderFiles WHERE dDocName='SLC02VGVUSORAC140641' AND fLinkRank=0 [Executed. Returned row(s): true]

    382 executions of a query on the folders tables - most of the documents that match the keyword criteria are at a folder depth level of three or four:

        >systemdatabase/6  12.13 11:17:48.114  IdcServer-199  2.57 ms. SELECT FolderFolders.*,FolderMetaDefaults.* FROM FolderFolders,FolderMetaDefaults WHERE FolderFolders.fFolderGUID=FolderMetaDefaults.fFolderGUID(+) AND ((FolderFolders.fFolderGUID = '1EB8E527E19B09ED3FE82EE310AEA13A')) [Executed. Returned row(s): true]

    By setting the 'FolderPathInSearchResults' configuration parameter to 'false', the above queries were no longer reported in the Server Output System Audit Information.

    Now, let's consider a practical scenario:

    Search result set page size: 100
    Average folder depth per document in the search result set: 5

    The number of folder-path-related queries will be 100 + 5*100 = 600. If each query takes slightly over 3 ms, you would have about 2,000 ms (2 seconds) of server time spent just gathering this information. The overall performance impact goes beyond server-side execution time, as this information also needs to travel from the server to the browser. If the documents are nested further into the folder hierarchy, additional hundreds of queries may be executed. If the folder path is not being displayed in the end-user interface profile, your system may be better off with the 'FolderPathInSearchResults' configuration parameter disabled.

    Read the article

  • Why are my backlinks not showing on Google for this asp.net website, with all I've done?

    - by Jason Weber
    I recently implemented many SEO techniques for a company on their asp.net website; in 6 months, we jumped from a PR1 to a PR3. But I'm having issues with Google backlinking. Here are some of the things I've done: Not only did I set up their own Google+ page 6 months ago, I update it pretty much daily with links, pictures, etc., and I blog about it on my own personal Google+ page and post links, etc. They have their own Twitter, Facebook and YouTube accounts, and all are updated almost daily. I listed the site in as many quality, relevant directories as possible 6 months ago, and I've avoided link farms. The site is solid SEO-wise: key-phrase-rich URLs, schema.org and rich snippets, no duplicate content, and www/non-www 301s, trailing slashes, etc. are all taken care of. Probably a ton of other things, but basically, the site is all set, SEO-wise. Here's what's confounding: when I do a link:www.example.com in Bing/Yahoo, it shows many backlinks. When I do a link:www.example.com in Google, it shows 0 links. Or when I use a site ranker like Web Site Rank Tool, it shows 0 backlinks from Google. Any suggestions would be appreciated!

    Read the article

  • Which is the best image hosting site for hosting images for a website? [closed]

    - by rahul dagli
    I currently have a website and blog and am using a limited web hosting plan. When I upload images to my hosting server, they consume a lot of bandwidth and space, so I was thinking of hosting the images on some other image hosting site and hotlinking them to my site. I found a few sites like ImageShack, Photobucket, TinyPic and Imgur; however, all seem to have certain restrictions. The features I am looking for are as follows:

    1. At least 10 GB of space
    2. At least 500 GB of bandwidth (because I have very high traffic)
    3. Very high speed, even during heavy load like 1000 visitors accessing every hour
    4. Ultra-reliable servers (99.9% uptime)
    5. Privacy control
    6. Must never delete images if inactive
    7. Create and manage albums
    8. A company that will stay in business for at least the next 10 years
    9. Free of cost
    10. Hotlinking/direct linking of images

    Read the article

  • Performance Improvement in .NET 4.5: Multicore Just-in-Time (JIT).

    - by anobre
    Hi everyone! While reading up on the performance improvements in the .NET 4.5 platform, I came across something extremely interesting: Multicore Just-in-Time (JIT). The theory is very simple: why not use multiple cores for JIT compilation? Beyond that, would it be possible to compile the methods in a specific order, with the first ones being those most likely to be executed? It sounds a bit crazy, but that is exactly what Multicore Just-in-Time (JIT) does. And best of all, it does so in an extremely simple way. ASP.NET 4.5 applications already do it by default. In other cases, you only need to execute two lines of code: one indicating the folder where the file that stores the profile will live, and the other to start the procedure. This profile is the file responsible for storing the compilation order of the methods, so that those most likely to be executed early are compiled first. Code for this process:

        ProfileOptimization.SetProfileRoot(@"C:\ProfileRoot");
        ProfileOptimization.StartProfile("profile");

    This compilation optimization will only be noticed after the profile has been created, so nothing will be perceptible the first time the application runs. At the end of the process, a file with the chosen name (in this case, "profile") will be created in the folder indicated as the root. Hope this tip helps! Cheers!

    Read the article

  • Is there a website that scrapes job postings to determine the popularity of web technologies? [closed]

    - by dB'
    I'm often in a position where I need to choose between a number of web technologies. These technologies might be programming languages, or web application frameworks, or types of databases, or some other kind of toolkit used by programmers. More often than not, after doing some research, I end up with a list of contenders that are all equally viable. They're all powerful enough to solve my problem, they're all popular and well supported, and they're all equally familiar/unfamiliar to me. There's no obvious rationale by which to choose between them. Still, I need to pick one, so at this point I usually ask myself a hypothetical question: which one of these technologies, if I invest in learning it, would be most helpful to me in a job search? Where can I go on the internet to answer this question? Is there a website/service that scrapes the texts of worldwide job postings and would allow me to compare, say, the number of employers looking for expertise in technology x vs. technology y? (Where x and y are Rails vs. Django, Java vs. Python, Brainfuck vs. LOLCode, etc.)

    Read the article

  • How can I use Performance Counters in C# to monitor 4 processes with the same name?

    - by Waffles
    I'm trying to create a performance counter that can monitor the processor time of applications, one of which is Google Chrome. However, I notice that the processor time I get for Chrome is unnaturally low. Looking in Task Manager, I realized the problem: Chrome has more than one process running under the exact same name, but each process has a different working set size and thus (I would assume) a different processor time. I tried doing this:

        // get all processes running under the same name, and make a performance counter
        // for each one.
        Process[] toImport = Process.GetProcessesByName("chrome");
        instances = new PerformanceCounter[toImport.Length];
        for (int i = 0; i < instances.Length; i++)
        {
            PerformanceCounter toPopulate = new PerformanceCounter
                ("Process", "% Processor Time", toImport[i].ProcessName, true);
            //Console.WriteLine(toImport[i].ProcessName + "#" + i);
            instances[i] = toPopulate;
        }

    But that doesn't seem to work at all - I just monitor the same process several times over. Can anyone tell me of a way to monitor separate processes with the same name?
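
    One direction that may help (a sketch, not something stated in the question itself): the "Process" performance-counter category distinguishes same-named processes by instance name ("chrome", "chrome#1", "chrome#2", ...), so enumerating the category's instance names instead of using Process.GetProcessesByName() avoids creating several counters for the same instance. Mapping each instance back to an actual PID (via the "ID Process" counter) is left out here.

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading;

        class ChromeCpuMonitor
        {
            static void Main()
            {
                // Each chrome.exe gets its own counter instance: "chrome", "chrome#1", "chrome#2", ...
                var category = new PerformanceCounterCategory("Process");
                PerformanceCounter[] counters = category.GetInstanceNames()
                    .Where(n => n.StartsWith("chrome", StringComparison.OrdinalIgnoreCase))
                    .Select(n => new PerformanceCounter("Process", "% Processor Time", n, true))
                    .ToArray();

                // "% Processor Time" needs two samples; the first NextValue() always returns 0.
                foreach (var counter in counters) counter.NextValue();
                Thread.Sleep(1000);

                foreach (var counter in counters)
                    Console.WriteLine("{0}: {1:F1} %", counter.InstanceName, counter.NextValue());
            }
        }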

    Read the article

  • What do you do when a client requires Rich Text Editing on their website?

    - by George Stocker
    As we all know by now, XSS attacks are dangerous and really easy to pull off. Various frameworks make it easy to encode HTML, like ASP.NET MVC does:

        <%= Html.Encode("string") %>

    But what happens when your client requires that they be able to upload their content directly from a Microsoft Word document? Here's the scenario: people can copy and paste content from Microsoft Word into a WYSIWYG editor (in this case TinyMCE), and then that information is posted to a web page. The website is public, but only members of that organization will have access to post information to a page. What is the best way to handle this requirement? Currently there is no checking done on what the client posts (since only 'trusted' users can post), but I'm not particularly happy with that and would like to lock it down further in case an account is hacked. The platform in question is ASP.NET MVC. The only conceptual method I'm aware of that meets these requirements is to whitelist HTML tags and let those pass through. Is there another way? If not, is the best way to store it in the database in any form, but only display it properly encoded and stripped of bad tags?

    NB: The questions differ in that the other one only assumes there's one way. I'm also asking the following questions:

    1. Is there a better way that doesn't rely on HTML whitelists?
    2. Is there a better way that relies on a different view engine?
    3. Is there a WYSIWYG editor that includes the ability to whitelist on the fly?
    4. Should I even worry about this, since it will only be for 'private posting' (much in the same way that a private blog allows HTML from the author, but since only he can post, it's not an issue)?

    Edit #2: If suggesting a WYSIWYG editor, it must be free (as in speech, or as in beer).

    Update: All of the suggestions thus far revolve around a specific rich text editor to use. Only suggest an editor if it allows for sanitization of HTML tags and it fulfills the requirement of accepting documents pasted from a WYSIWYG editor like Microsoft Word. There are three methods that I know of:

    1. Don't allow HTML.
    2. Allow HTML, but sanitize it.
    3. Find a rich text editor that sanitizes and allows HTML.

    The previous questions remain (1-4 above).

    Related question: Preventing Cross Site Scripting (XSS)
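
    For the "allow HTML, but sanitize it" option, the usual pattern is a server-side whitelist pass over whatever the WYSIWYG editor submits, since the editor itself can be bypassed. A minimal sketch, assuming the open-source HtmlSanitizer NuGet package (the package, namespace, and the tag/attribute lists below are illustrative assumptions, not part of the question):

        // NuGet: HtmlSanitizer (namespace is Ganss.XSS in older versions, Ganss.Xss in newer ones)
        using Ganss.XSS;

        public static class WordPasteCleaner
        {
            // Whitelist-based cleanup of HTML pasted from Word, run on the server before storing/rendering.
            public static string Clean(string untrustedHtml)
            {
                var sanitizer = new HtmlSanitizer();

                // Start from an explicit whitelist instead of trying to blacklist bad tags.
                sanitizer.AllowedTags.Clear();
                foreach (var tag in new[] { "p", "br", "b", "strong", "i", "em", "ul", "ol", "li", "a", "h2", "h3" })
                    sanitizer.AllowedTags.Add(tag);

                sanitizer.AllowedAttributes.Clear();
                sanitizer.AllowedAttributes.Add("href");      // keep links; drop style/class/onclick, etc.

                sanitizer.AllowedSchemes.Clear();
                sanitizer.AllowedSchemes.Add("http");
                sanitizer.AllowedSchemes.Add("https");

                return sanitizer.Sanitize(untrustedHtml);
            }
        }

    Whether it runs on input (before the content reaches the database) or on output (before rendering), the important part is that it runs on the server, independently of whatever TinyMCE is configured to accept.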

    Read the article

  • How do I display values from another website on a new HTML page?

    - by user3098728
    How do I display a value from a different website in a new HTML file? There is an example field of values that needs to be displayed in the new HTML file, and I want to display those values in the input box (Contract ID) of this page (JSFiddle). I have two pieces of JS code that should display the values, but unfortunately it's not working and I don't know how to display the value in the HTML input box. Please help me. Thank you. I want to display the value in this input box. Here is the JS file that reads the values:

        function scanLapVerification() {
            try {
                var page_title = "Title";
                var el = getElement(document, "class", "view-operator-verification-title", "");
                if (!el || el.length == 0) return;
                if (el[0].innerText != page_title) return;
                var page_title = '';
                var el = getElement(document, "class", "workflowActivityDetailPanel", "");
                if (el && el.length > 0) {
                    var eltr = getElement(el[0], "tag", "tr", "");
                    if (eltr && eltr.length > 0) {
                        // Read Contract ID
                        var contractId = { CI: { id: null } };
                        var con_id = null;
                        for (var i = 0; i < eltr.length; i++) {
                            tr_text = eltr[i].innerText;
                            if (tr_text.substr(0, "Contract ID".length) == "Contract ID") con_id = "CI";
                            if (con_id && tr_text.substr(0, "Contract ID".length) == "Contract ID") {
                                contractId[con_id].id = tr_text.substr("Contract ID".length + 1, tr_text.length - "Contract ID".length - 1);
                            }
                        }
                        var contract_id = contractId.CI.id;
                        return { content: "cid_check", con_id: con_id };
                    }
                }
                return { status: "KO" };
            } catch (e) {
                alert("Exception: scanLapVerification\n" + e.Description);
                return { status: "KO", message: e };
            }
        };

    And here is the second JS file, which displays it on a new HTML page:

        function scanLapVerification() {
            chrome.tabs.sendRequest(tabLapVerification, { method: "scanLapVerification" }, function (response) {
                msgbox("receiveResponse: scanLapVerification " + jsonToString(response, "JSON"));
                // maintaining state in the background
                if (response.data.content == "cid_check") {
                    // Popup window features
                    var popupWindow = null;
                    var name;
                    var width = 550;
                    var height = 200;
                    var left = parseInt((screen.availWidth / 2) - (width / 2));
                    var top = parseInt((screen.availHeight / 2) - (height / 2));
                    var windowFeatures = "width=" + width + ",height=" + height + ",left=" + left + ",top=" + top +
                        ",screenX=" + left + ",screenY=" + top;
                    // Input new address with popup window
                    if (confirm("Does the client have a new address?") == true) {
                        popupWindow = window.open('/htmlname.htm', "title", windowFeatures + encodeURIComponent(response.data.contract_id));
                        popupWindow.focus();
                    } else {
                        name = "";
                    }
                }
            });
        }

    Read the article

  • P4 vs. i3/i5 *T in power consumption and performance [migrated]

    - by Walter Zomb
    I am running an Intel P4 Prescott with HT on my home server (a Linux file server on encrypted disks on software RAID 5, and a virtualisation host for three further machines). The performance for this purpose is really okay. When the system is idle it consumes about 140 W of power. I am considering buying a new mainboard for, e.g., an Intel i3-2100T or an Intel i5-2390T. Both are low-power CPUs with a TDP of about 40 W. Does anyone have experience with how much power a recent mainboard with one of these CPUs and 3-4 'green-energy' disks (6 W each) consumes? Would I get under the 100 W threshold? And what about the performance of these low-power CPUs - are they comparable to an Intel P4 with HT? regards, walter

    Read the article

  • SQL Azure Federation - how much data before performance benefits?

    - by Donald Hughes
    To avoid premature optimization, I don't want to implement SQL Azure's Federation too early. Is there a rule of thumb for how much data a table would need to have before seeing performance benefits from sharding? I know there won't be a precise answer, as there are too many variables to consider, especially with much of SQL Azure's resources being hidden/unknown. To put it into several more concrete examples, would Federation improve performance in any of the table scenarios below?

    100,000 rows (~200 MB)
    1,000,000 rows (~2 GB)
    10,000,000 rows (~20 GB)
    100,000,000 rows (~200 GB)

    For the sake of elaboration, we can assume this is the largest table that would be federated. It consists of order details, joined to an orders table with a 'customer_id' foreign key, which would be the distribution key. This is a fairly standard multi-tenant CRUD order entry system, with a typical assortment of reporting needs (customer order totals by day/month/year, etc.).

    Read the article

  • Zero-channel RAID for High Performance MySQL Server (IBM ServeRAID 8k) : Any Experience/Recommendation?

    - by prs563
    We are getting an IBM rack-mount server, and it has the IBM ServeRAID 8k storage controller with zero-channel RAID and 256 MB of battery-backed cache. It can support RAID 10, which we need for our high-performance MySQL server that will have 4 x 15K RPM 300 GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card, or should we replace it with another IBM RAID card?

    The IBM ServeRAID 8k SAS Controller option provides 256 MB of battery-backed 533 MHz DDR2 standard power memory in a fixed mounting arrangement. The device attaches directly to the IBM planar, which can provide full RAID capability.

    Manufacturer: IBM
    Manufacturer Part #: 25R8064
    Cost Central Item #: 10025907
    Product Description: IBM ServeRAID 8k SAS - Storage controller (zero-channel RAID) - RAID 0, 1, 5, 6, 10, 1E
    Device Type: Storage controller (zero-channel RAID) - plug-in module
    Buffer Size: 256 MB
    Supported Devices: Disk array (RAID)
    Max Storage Devices Qty: 8
    RAID Level: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E
    Manufacturer Warranty: 1 year warranty

    Read the article

  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability. I'm looking for the downsides of the following scenario: I have two servers with the same configuration, content, and MySQL replication (dual-master). They are in different datacenters - let's call them serverA and serverB. Users use serverA; serverB is more like a backup. Now, I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB - let's call them ns1.website.com and ns2.website.com (assuming I own website.com). Then I configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down I can (either manually or automatically from serverB) change the configuration of serverB's DNS to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be in DNS failovers. I know that it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration, etc.), and that some small part of users won't use serverB anyway. And I'm OK with that. But what are the other downsides of such an approach? An alternative scenario is that ns1.website.com returns serverA's IP as the website's IP, and ns2.website.com returns serverB's IP as the website's IP. But AFAIK clients don't always use the primary nameserver and sometimes use the secondary one, so some small part of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would likely use serverB instead of serverA (statistically)? This option also has the downside that when serverA comes back up, it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could have failed in the meantime, etc.), so I'm adding it only as a theoretical alternative. I was thinking about using some professional DNS failover companies, but they charge per DNS request and the fees are very high (why?).

    Read the article

  • VMware vSphere 4.1: host performance graphs show "No data available", except the realtime view, which works fine

    - by Graeme Donaldson
    Here's our scenario: Site 1 has 3 hosts, and our vCenter server is here. Site 2 has 3 hosts. All hosts are ESXi 4.1 update 1. If I view the Performance tab for any host in Site 1, I can view realtime, 1 Day, etc., i.e. all the views give me graph data. For the hosts in Site 2, I can view the realtime graphs, 1 Day and 1 Week both say "No data available". 1 Month had mostly nothing, 1 Year shows that it was working fine for a long time and then started breaking. 1 Month view: 1 Year view: What would cause this loss of performance data?

    Read the article

  • How can I report a website that uses the webmail APIs to send spam?

    - by Igoru
    I've signed up for a cool job website that, unfortunately, asks you if you want to "invite your friends", and if you agree, you give it access to your Gmail contacts to send the invites. However, contrary to what anyone would expect, they don't give you a list of people to choose from; instead, they directly send spam to your entire contact list, like old-fashioned Outlook viruses. When you complain about this to them, they simply say "we will check the application and see if there is anything that might be confusing for the users". For me and some other friends (who fell for the same trick), this is a clear breach of web best practices and a big disrespect of the users' trust. Thus, I would like to know what we can do to stop the website from using the Gmail/Yahoo/Outlook APIs to send spam this way.

    P.S.: I wonder what would happen if I had given this website access to post on my Facebook timeline as well. I've got a couple of calls from relatives asking about the email, and I wonder how many unrelated people got this spam, like HR addresses from my past and whatnot.

    Read the article

  • What do we have to measure for server performance if we can't measure the server processing time from the client side?

    - by AsadYarKhan
    If we cannot measure server processing time from the client side, then which attributes are good to measure on the client side for gauging server-side performance, and which attributes are important? I know we can get the response time, latency, throughput, etc., but how do we interpret the server-side result from these attributes? How can we analyse whether it is my code that is taking a lot of time, the web server, or the server machine (hardware)? How would I know which thing needs to be upgraded or improved? Please point me to any article or book I should study, or explain here if you can, so I can interpret the server-side result using these attributes: response time, latency and throughput. You can mention other performance attributes if I need them to understand the server result.
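
    From the client you only ever observe total response time, which bundles network latency, queueing, server processing and transfer time together; the server's share can't be isolated unless the server reports it itself (for example in a response header your application adds) or you measure network latency separately (against a trivially cheap endpoint) and subtract it. A rough client-side sketch in C#, where the URL and the X-Server-Time header are purely hypothetical:

        using System;
        using System.Diagnostics;
        using System.Net.Http;
        using System.Threading.Tasks;

        class ResponseTimeProbe
        {
            static async Task Main()
            {
                var client = new HttpClient();
                var sw = Stopwatch.StartNew();

                // Time to first byte: headers received, body not yet downloaded.
                var response = await client.SendAsync(
                    new HttpRequestMessage(HttpMethod.Get, "http://example.com/api/orders"),
                    HttpCompletionOption.ResponseHeadersRead);
                long ttfb = sw.ElapsedMilliseconds;

                await response.Content.ReadAsStringAsync();   // now download the body
                long total = sw.ElapsedMilliseconds;

                // Only meaningful if the server itself emits its processing time
                // in a header such as "X-Server-Time" (hypothetical, added by you).
                string serverTime = response.Headers.TryGetValues("X-Server-Time", out var values)
                    ? string.Join(",", values) + " ms"
                    : "not reported";

                Console.WriteLine("TTFB: {0} ms, total: {1} ms, server-reported: {2}", ttfb, total, serverTime);
            }
        }

    Comparing TTFB against a bare latency measurement (a ping, or a request for a static file) gives a rough idea of how much of the time is server processing versus network.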

    Read the article

  • Does SNI represent a privacy concern for my website visitors?

    - by pagliuca
    Firstly, I'm sorry for my bad English. I'm still learning it. Here it goes: When I host a single website per IP address, I can use "pure" SSL (without SNI), and the key exchange occurs before the user even tells me the hostname and path that he wants to retrieve. After the key exchange, all data can be securely exchanged. That said, if anybody happens to be sniffing the network, no confidential information is leaked* (see footnote). On the other hand, if I host multiple websites per IP address, I will probably use SNI, and therefore my website visitor needs to tell me the target hostname before I can provide him with the right certificate. In this case, someone sniffing his network can track all the website domains he is accessing. Are there any errors in my assumptions? If not, doesn't this represent a privacy concern, assuming the user is also using encrypted DNS? Footnote: I also realize that a sniffer could do a reverse lookup on the IP address and find out which websites were visited, but the hostname travelling in plaintext through the network cables seems to make keyword based domain blocking easier for censorship authorities.

    Read the article

  • Can a website see/know my MAC address even if I use a VPN?

    - by ilhan
    I have searched other questions and read many of them, but I could not find enough information. My question is: can a website see my MAC address, or gather enough information to tell that I am the same person, under these conditions:

    1. I am using a VPN, so I use two IPs: the first one is my normal IP, the second one is the VPN's IP.
    2. I use two browsers to hide from browser fingerprinting.
    3. I use both browsers in Incognito Mode.
    4. I always use one browser for the normal IP and one for the VPN IP.
    5. I do not know whether the website uses cookies or not.

    Can they collect enough information to prove that these two identities belong to the same person? Is there any other way for them to see that I am the same person? I use different IPs, different browsers, and both browsers in incognito mode. I even changed one browser's language to English only, so even if they collect info from the browsers, they will see two browsers using different languages.

    (Addition after edit): So I have changed my IP and browser information, and the website can no longer use this information to prove that I am the same person using two accounts. Which brings me to the title: can they see my MAC address? I think it is the last way they could identify me, and that is my main question. I wrote the information above to mention that I have changed IPs and taken some precautions against browser fingerprinting (by the way, my VPN provider already offers a service for blocking it). I wrote it because I read similar advice in some related questions, but my question is whether they can see my MAC address (or anything else that could identify me) despite all these precautions. And lastly, is there anything extra I can do to stay anonymous? For example, can my system clock or anything else give away information? Thanks in advance.

    Read the article

  • What factors can affect performance of Http Server written in C-Sharp? [on hold]

    - by Yousaf
    I am having trouble handling huge databases. I have multiple clients, about 100-300 (the clients are basically servers running, e.g., Windows with SQL Server). Each client may have 38 thousand rows/listings of data, and each row has 10-12 fields. I cannot afford to keep JSON files for each client and then handle them on the main server, because of memory issues. What if I have an HTTP server written in C or C# installed on the clients, and they return 250 rows in each response to the main server? How can factors like speed, memory, or other issues affect us? What exactly am I asking? In short: if a server written in C# sends 250 rows per request, what factors can affect the server's performance, for example speed, processing, operating system, or the implementation of the server's algorithm? How do these factors really affect performance at large scale?
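
    For what it's worth, in a setup like this the language usually matters less than: how the 250 rows are read from the local database (a paged, indexed query versus loading the whole table), how they are serialized, whether requests are handled concurrently or one at a time, the number of round trips (250 rows per request is roughly 152 requests for a 38,000-row client), and payload size/compression. A minimal paged-endpoint sketch using HttpListener; the port, URL path, page size, and the LoadPage helper are assumptions for illustration only:

        using System;
        using System.Net;
        using System.Text.Json;   // .NET Core 3.0+ / .NET 5+

        class ListingServer
        {
            // Serves listings in fixed-size pages so the central server pulls
            // 250 rows per request instead of the whole 38,000-row table.
            const int PageSize = 250;

            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://+:8080/listings/");   // hypothetical endpoint
                listener.Start();

                while (true)
                {
                    // One request at a time; under real load, hand contexts to worker tasks instead.
                    HttpListenerContext ctx = listener.GetContext();
                    int page = int.TryParse(ctx.Request.QueryString["page"], out int p) ? p : 0;

                    // LoadPage would run a paged query (OFFSET/FETCH) against the local SQL Server.
                    object[] rows = LoadPage(page * PageSize, PageSize);
                    byte[] body = JsonSerializer.SerializeToUtf8Bytes(rows);

                    ctx.Response.ContentType = "application/json";
                    ctx.Response.ContentLength64 = body.Length;
                    ctx.Response.OutputStream.Write(body, 0, body.Length);
                    ctx.Response.Close();
                }
            }

            static object[] LoadPage(int offset, int count)
            {
                return new object[0];   // placeholder for the real database query
            }
        }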

    Read the article

  • Performance impact of running Linux in a virtual machine in Windows?

    - by vovick
    Hello, I'd like to know what performance impact I could expect when running Linux in a virtual machine on Windows. The job I need Linux for is heavy and almost non-stop code compilation with GCC. Dual-booting doesn't look like a very attractive solution, so I'm counting on low VM overhead right now (10-20% would be fine for me, but 50% or more would be unacceptable). Has anyone tried to measure the performance difference? Are there any comparison tables? Which virtual machine with the lowest possible overhead would you suggest? My host OS is Win7 and I've got a modern Core i7 with VT-x present. Thanks!

    Read the article
