Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 6/67 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point

    ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012

    News Facts

    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine, the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud.

    The Oracle Exadata X3-2 and X3-8 Database In-Memory Machines can store up to hundreds of terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives and making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing.

    To realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks.

    With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development, and disaster recovery systems, and is a fully redundant system that can be used with mission-critical applications.

    Next-Generation Technologies Deliver Dramatic Performance Improvements

    Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database workloads. Exadata X3 systems leverage next-generation technologies to deliver significant performance enhancements, including:

    - Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata's unique Hybrid Columnar Compression capabilities, hundreds of terabytes of user data can now be managed entirely within Flash.
    - 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new software also runs on previous-generation Exadata systems, increasing their capacity for writes tenfold.
    - 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel Xeon E5-2600 series of processors.
    - Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2, providing 40 10Gb network ports per rack for connecting users and moving data.
    - Up to 30 percent reduction in power and cooling.

    Configured for Your Business, Available Today

    Oracle Exadata X3-2 Database In-Memory Machine systems are available in Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configurations to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. The X3-2 and X3-8 are fully compatible with prior Exadata generations, and existing systems can be upgraded with Oracle Exadata X3-2 servers.

    Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle's PeopleSoft, Oracle's Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.

    Supporting Quotes

    "Forward-looking enterprises are moving towards Cloud Computing architectures," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Oracle Exadata's unique ability to run any database application on a fully scale-out architecture, using a combination of massive memory for extreme performance and low-cost disk for high capacity, delivers the ideal solution for Cloud-based database deployments today."

    Supporting Resources

    Oracle Press Release | Oracle Exadata Database Machine | Oracle Exadata X3-2 Database In-Memory Machine | Oracle Exadata X3-8 Database In-Memory Machine | Oracle Database 11g | Follow Oracle Database via Blog, Facebook and Twitter | Oracle OpenWorld 2012 | Oracle OpenWorld 2012 Keynotes | Like Oracle OpenWorld on Facebook | Follow Oracle OpenWorld on Twitter | Oracle OpenWorld Blog | Oracle OpenWorld on LinkedIn | Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - watch for the replay, available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We're writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better - interaction with the database is critical to application performance, so we're looking for database top tips too. There's a Microsoft Surface apiece for the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you'll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] - there are examples of what we're looking for, and full competition details, at Application Performance: The Best of the Web.

    Enter the judges… As mentioned in my last blog post, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We're now ready to reveal their secret identities! Judging your ASP.NET tips will be:

    - Jean-Philippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He's a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk.
    - Nik Molnar, a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects.
    - Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities.
    - Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he's worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development.

    And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll):

    - Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He's authored SQL Server Tacklebox and three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple-Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando.
    - Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He's been working with SQL Server since version 6.0. Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk.
    - Jonathan Allen, leader and founder of the PASS SQL South West user group. He's been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He's spoken at SQLBits and SQL in the City, as well as at local user groups across the UK. He's also a moderator at ask.sqlservercentral.com.

    Read the article

  • How to call a method in a thread with arguments and return a value

    - by ratty
    I'd like to call a method in a thread, passing arguments and getting a return value. Here's my example:

        class Program
        {
            static void Main()
            {
                Stopwatch stop = new Stopwatch();
                stop.Start();
                Thread FirstThread = new Thread(new ThreadStart(Fun1));
                Thread SecondThread = new Thread(new ThreadStart(Fun2));
                FirstThread.Start();
                SecondThread.Start();
            }

            public static void Fun1()
            {
                for (int i = 1; i <= 1000; i++)
                {
                    Console.WriteLine("Fun1 writes:{0}", i);
                }
            }

            public static void Fun2()
            {
                for (int i = 1000; i >= 6; i--)
                {
                    Console.WriteLine("Fun2 writes:{0}", i);
                }
            }
        }

    I know the example above runs successfully, but what if the method looks like this instead?

        public int fun1(int i)
        {
            for (int n = i; n >= i + 10; n++)
            {
                Console.WriteLine("Fun2 writes:{0}", i);
            }
        }

    How can I call this in a thread? Is it possible? Can anybody help me?
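    One common approach is a minimal sketch like the following (not from the original post; the method body, the argument 5, and names like result and worker are illustrative): capture the arguments and the return value with a lambda, or, on .NET 4.5 and later, let Task<int> carry the return value directly.

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class Example
        {
            // Hypothetical worker method with an argument and a return value.
            public static int Fun1(int i)
            {
                int sum = 0;
                for (int n = i; n <= i + 10; n++)
                {
                    sum += n;
                }
                return sum;
            }

            static void Main()
            {
                // Option 1: a lambda captures the argument; the result is
                // stored in a local that is read only after Join().
                int result = 0;
                Thread worker = new Thread(() => { result = Fun1(5); });
                worker.Start();
                worker.Join(); // wait for completion before reading result
                Console.WriteLine("Thread result: {0}", result);

                // Option 2 (.NET 4.5+): Task<int> returns the value directly.
                Task<int> task = Task.Run(() => Fun1(5));
                Console.WriteLine("Task result: {0}", task.Result); // blocks until done
            }
        }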

    Read the article

  • Inserting a unique date into a txt document

    - by durian
    I'm trying this script to insert only a unique date into a text file, but it isn't working properly:

        $log_file_name = "logfile.txt";
        $log_file_path = "log_files/$id/$log_file_name";

        if (file_exists($log_file_path)) {
            $not = "not";
            $todaydate = date('d,m,Y');
            $today = "$todaydate;";
            $strlength = strlen($today);
            $file_contents = file_get_contents($log_file_path);
            $file_contents_arry = explode(";", $file_contents);
            if (!in_array($todaytodaydate, $file_contents_arry)) {
                $append = fopen($log_file_path, 'a');
                $write = fwrite($append, $today); // writes our string to our file
                $close = fclose($append);         // closes our file
            } else {
                $append = fopen($log_file_path, 'a');
                $write = fwrite($append, $not);   // writes our string to our file
                $close = fclose($append);         // closes our file
            }
        } else {
            mkdir("log_files/$id", 0700);
            $todaydate = date('d,m,Y');
            $today = "$todaydate;";
            $strlength = strlen($today);
            $create = fopen($log_file_path, "w");
            $write = fwrite($create, $today, $strlength); // writes our string to our file
            $close = fclose($create);                     // closes our file
        }

    The problem is in the if/else statement that decides whether the date should be written when it's already in the array.
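    For reference, the logic the script seems to be aiming for (read the log, split on ';', append today's date only if it is absent) can be sketched as follows. This is an illustration in C#, not the original PHP, and the path is hypothetical:

        using System;
        using System.IO;
        using System.Linq;

        class UniqueDateLogger
        {
            static void Main()
            {
                string logFilePath = "log_files/example/logfile.txt"; // hypothetical path
                string today = DateTime.Now.ToString("dd,MM,yyyy");   // matches PHP date('d,m,Y')

                Directory.CreateDirectory(Path.GetDirectoryName(logFilePath));

                // Read existing entries; a missing file means no entries yet.
                string contents = File.Exists(logFilePath) ? File.ReadAllText(logFilePath) : "";
                var entries = contents.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);

                // Append today's date only if it has not been logged before.
                if (!entries.Contains(today))
                {
                    File.AppendAllText(logFilePath, today + ";");
                }
            }
        }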

    Read the article

  • How are reads from and writes to sockets synchronized?

    - by Roman
    We create a socket. On one side of the socket we have a "server" and on the other side there is a "client". Both the server and the client can write to and read from the socket. That much I understand. What I don't understand is the following:

    1. If a server reads from the socket, does it see only the stuff that was written to the socket by the client? I mean, if the server writes something to the socket and then reads from the socket, will it (the server) see the stuff it wrote there itself? I hope not.

    2. Let's consider the following situation. A client writes something to the socket, then writes something new to the socket, and then the server reads from the socket. What will the server see there? Only the "new" stuff written by the client, or both the "new" and the "old"?

    3. If a client (or server) writes to the socket, can it see whether the written information was received by the other side? For example, will out.println("Hello, Server!") return true if the server received this message?
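    A tiny sketch may make the stream semantics concrete: a TCP connection gives each side its own independent send and receive streams, so a side never reads back its own writes, and two successive writes from one side arrive as one ordered byte stream. This is an illustration in C# (the question's out.println suggests Java); the port number is hypothetical and the example is loopback only:

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        class SocketDemo
        {
            static void Main()
            {
                var listener = new TcpListener(IPAddress.Loopback, 5050); // hypothetical port
                listener.Start();

                // Server: writes a greeting, then reads; it only ever receives
                // bytes the client sent, never its own output.
                Task server = Task.Run(() =>
                {
                    using (TcpClient conn = listener.AcceptTcpClient())
                    using (var reader = new StreamReader(conn.GetStream()))
                    using (var writer = new StreamWriter(conn.GetStream()) { AutoFlush = true })
                    {
                        writer.WriteLine("Hello, Client!");
                        Console.WriteLine("Server read: " + reader.ReadLine());
                    }
                });

                // Client: its writes become an ordered byte stream; the server
                // sees "old" bytes first, then "new" ones, never interleaved.
                using (var client = new TcpClient("127.0.0.1", 5050))
                using (var reader = new StreamReader(client.GetStream()))
                using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
                {
                    writer.WriteLine("old data, then new data");
                    Console.WriteLine("Client read: " + reader.ReadLine());
                }

                server.Wait();
                listener.Stop();
            }
        }

    Note that a successful write only means the bytes were handed to the local TCP stack; it is not an acknowledgement that the other side read them.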

    Read the article

  • Formatting the output of a custom tool so I can double click an error in Visual Studio and the file opens

    - by Ben Scott
    I've written a command line tool that preprocesses a number of files, then compiles them using CodeDom. The tool writes a copyright notice and some progress text to the standard output, then writes any errors from the compilation step using the following format:

        foreach (var err in results.Errors)
        {
            // err is CompilerError
            var filename = "Path\To\input_file.xprt";
            Console.WriteLine(string.Format(
                "{0} ({1},{2}): {3}{4} ({5})",
                filename,
                err.Line,
                err.Column,
                err.IsWarning ? "" : "ERROR: ",
                err.ErrorText,
                err.ErrorNumber));
        }

    It then writes the number of errors, like "14 errors". This is an example of how an error appears in the console:

        Path\To\input_file.xrpt (73,28): ERROR: An object reference is required for the non-static field, method, or property 'Some.Object.get' (CS0120)

    When I run this as a custom tool in VS2008 (by calling it in the post-build event command line of one of my project's assemblies), the errors appear nicely formatted in the Error List, with the correct text in each column. When I roll over the filename, the fully qualified path pops up. The line and column are different from the source file because of the preprocessing, which is fine. The only thing that stands out is that the Project given in the list is the one that has the post-build event.

    The problem is that when I double-click an error, nothing happens. I would have expected the file to open in the editor. I'm vaguely aware of the Microsoft.VisualStudio.Shell.Interop namespace, but I think this should be possible just by writing to the standard output.
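    For what it's worth, Visual Studio's jump-to-source generally works when output matches the canonical MSBuild diagnostic format, where the location is attached to the filename with no space and a lowercase category keyword plus code precede the message. A sketch of the question's loop emitting that format (an assumption about what VS2008 expects, not something confirmed in the post):

        foreach (CompilerError err in results.Errors)
        {
            // Canonical MSBuild format: origin(line,col): category code: text
            // e.g. Path\To\input_file.xprt(73,28): error CS0120: An object reference is required...
            Console.WriteLine("{0}({1},{2}): {3} {4}: {5}",
                err.FileName,                          // no space before the parenthesis
                err.Line,
                err.Column,
                err.IsWarning ? "warning" : "error",   // lowercase keyword
                err.ErrorNumber,                       // code before the colon, not in trailing parens
                err.ErrorText);
        }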

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted from server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads.

    Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2?

    I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system.

    The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

    Read the article

  • Logfiles go blank after logrotate rotates them.

    - by Hilt86
    I have an Ubuntu 8.04 LTS server that runs OpenVPN. The OpenVPN server writes to a standard logfile under /var/log, and until a month ago logrotate would automatically rotate and compress the files. The files are still being rotated, but the new logfile (ovpn.log) stays empty. Restarting the OpenVPN daemon fixes the issue (i.e. openvpn writes status events to the file), but about 10 days later, when the file is rotated again, openvpn can't write to the logfile any more. This is also strange because logrotate is set to rotate every 6 months. OpenVPN runs as nobody and the logfiles are owned by root and admin, which is strange: if permissions were the cause, it should either work at all times or not work at all - unless openvpn runs as root temporarily and then drops down to nobody after initializing?

    Read the article

  • OpenWrt logging: how to find out "wifi deauthentication"

    - by user62367
    If someone starts to use the wifi, I can see that with logread:

        Jan 23 21:04:47 router daemon.info hostapd: wlan0: STA XX:XX:XX:XX:XX:XX IEEE 802.11: authenticated

    But how can I see that they are disconnecting? There's no "bla-bla deauthenticated bla" line in logread, or anything else that indicates someone got disconnected. I tried to google: http://wiki.openwrt.org/doc/uci/system - but it doesn't say anything about loglevel. Can anyone help me find out how to tell when someone disconnects their wifi from the router? logread doesn't even write a line when someone disconnects. Please help! It's important! Thank you!

    Read the article

  • Speed up VMware ESX guest HDD access

    - by Uwe
    Hello, we run several Windows servers and Windows clients on our VMware ESX. One of the Windows 2003 servers is our build server, with heavy HDD reads/writes. This machine ran on physical hardware before and was virtualized onto the ESX. Is there any way to increase the HDD performance? Perhaps there are special Windows (guest) drivers? The files are stored on a RAID 6 array. The performance graph in the VMware Infrastructure Client shows reads up to 650 KBps and writes up to 4000 KBps. Thank you. Regards, Uwe

    Read the article

  • Need script to redirect STDIN & STDOUT to named pipes

    - by user54903
    I have an app that launches an authentication helper (my script) and uses STDIN/STDOUT to communicate. I want to redirect STDIN and STDOUT from this script to two named pipes for interaction with another program. E.g.: SCRIPT_STDIN > pipe1, SCRIPT_STDOUT < pipe2.

    Here is the flow I'm trying to accomplish:

    - [Application] - launches the helper script, writes to the helper's STDIN, reads from the helper's STDOUT (example: STDIN: username,password; STDOUT: LOGIN_OK)
    - [Helper script] - reads STDIN (data from the app) and forwards it to PIPE1; reads from PIPE2 and writes that back to the app on STDOUT
    - [Other process] - reads from PIPE1, processes, and returns results to PIPE2

    The cat command can almost do what I want. If there were an option to copy STDIN to STDERR, I could do this with a command like the following (assuming a fictitious option -e that echoes input to STDERR rather than STDOUT):

        cat -e < PIPE2 2> PIPE1

    (read from PIPE2 and write it to STDOUT; copy the input, which would normally go to STDERR, to PIPE1)

    Read the article

  • Please explain my fio results - is O_SYNC|O_DIRECT misbehaving on Linux?

    - by Zoltan
    I'm going mad trying to figure out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840 Pro attached to an LSI HBA (3Gbit/s ports).

    This is the result I'm getting under FreeBSD 9.1, regardless of sync being set to 0 or 1:

        WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec

    On Linux, this is the result with sync=0:

        WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec

    and with sync=1:

        WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec

    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference - there's no filesystem, no barrier, nothing between the writes and the drive itself, especially with O_DIRECT|O_SYNC set. Any ideas?

    For reference, here's the fio script I'm testing with:

        [global]
        bs=1M
        ioengine=sync
        iodepth=4
        size=16g
        direct=1
        runtime=60
        filename=/dev/sdh
        sync=1

        [rand-write]
        rw=randwrite
        stonewall

    Read the article

  • How many times can data be read from a USB flash drive?

    - by John
    While I am aware that performing writes on a USB flash drive degrades the life expectancy of the device, and I have heard that the number of write cycles is anywhere from 100 thousand to 10 million, I have not heard anything about the number of read operations. Does reading from the device count toward this total?

    I am interested in writing only once to a flash drive, setting it to read-only, and then reading files from the device a thousand or more times per day. I am wondering whether (at, say, 1,000 reads per day) the flash drive will need to be replaced within 100 days (assuming a 100,000 r/w cycle lifetime)?

    Read the article

  • Format as NTFS without Journal

    - by palswim
    I have a flash drive that I'd like to format for use in Windows. I would like support for symbolic links, so I can't use FAT/FAT32/exFAT. I would prefer to use the ext4 filesystem with journaling disabled, via the Ext2Fsd filesystem driver, but have (so far) found that:

    - I can't make soft links across filesystems that Windows will read;
    - Ext2Fsd has an annoying bug where it always mounts partitions as read-only, and it has problems resuming from sleep;
    - some programs have problems writing to the partition even after manually configuring Ext2Fsd to allow writes.

    So, I would like to use NTFS for the flash drive, but with the journaling feature disabled (it causes extra writes), if possible. How can I do this?

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009:

        On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns.

    Has that amount of transparency happened? Is there a technet article somewhere?

    Say my score was limited by my Memory subscore of 5.9. A naive person would suggest: buy faster RAM. Which is wrong, of course. From the Windows help:

        If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9.

    You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from 3 Windows blog entries and an article:

        Memory subscore

        Score    Conditions
        =======  ================================
        1.0      < 256 MB
        2.0      < 500 MB
        2.9      <= 512 MB
        3.5      < 704 MB
        3.9      < 944 MB
        4.5      <= 1.5 GB
        5.9      < 4.0GB-64MB on a 64-bit OS; Windows Vista highest score
        7.9      Windows 7 highest score

        Graphics subscore

        Score    Conditions
        =======  ======================
        1.0      doesn't support DX9
        1.9      doesn't support WDDM
        4.9      does not support Pixel Shader 3.0
        5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
        7.9      Windows 7 highest score

        Gaming graphics subscore

        Score    Result
        =======  =============================
        1.0      doesn't support D3D
        2.0      supports D3D9, DX9 and WDDM
        5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
        6.0-6.9  good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
        7.0-7.9  even higher framerates at even higher resolutions
        7.9      Windows 7 highest score

        Processor subscore

        Score    Conditions
        =======  ==========================================================================
        5.9      Windows Vista highest score
        6.0-7.9  many quad-core processors will be able to score in the high 6 to low 7 ranges
        7.9      8-core systems will be able to approach 8.9; Windows 7 highest score

        Primary hard disk subscore (note)

        Score    Conditions
        =======  ========================================
        1.9-3.0  limits for pathological drives that stop responding with pending writes
        5.9      highest you're likely to see without an SSD; Windows Vista highest score
        7.9      Windows 7 highest score

    Bonus chatter: you can find your WEI detailed test results in C:\Windows\Performance\WinSAT\DataStore. For example, 2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml:

        <WinSAT>
          <WinSPR>
            <DiskScore>5.9</DiskScore>
          </WinSPR>
          <Metrics>
            <DiskMetrics>
              <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
              <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
              <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
            </DiskMetrics>
          </Metrics>
        </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine, then how do I increase my hard drive's random I/O throughput?

    Update - amount of memory limits the rating. Some people don't believe Microsoft's statement that having less than 4GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. From xxx.Formal.Assessment (Recent).WinSAT.xml:

        <WinSPR>
          <LimitsApplied>
            <MemoryScore>
              <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
            </MemoryScore>
          </LimitsApplied>
        </WinSPR>

    References:
    Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
    Understand and improve your computer's performance in Windows Vista
    Engineering Windows 7 Blog: Engineering the Windows 7 "Windows Experience Index"

    Read the article

  • Why does IIS produce a 500 response on all requests?

    - by HMR
    In IIS 7.5 on Windows Web Server 2008, I get a 500 response on any request. I checked in Firebug whether I'm being forwarded, but the request is made with no forwarding; I just get a 500 response with a page that looks like a Symfony response page. I disabled all URL rewrites and requested index.html (still got a 500). I checked error pages under feature settings, and detailed errors is selected. I checked the PHP error log, and no writes have been made for several days. I think someone has fiddled with the IIS server settings and got it to return 500 for any request (even locally), but I'm not able to find what it is.

    [UPDATE] While I was writing this and listing things I've tried, I tried selecting detailed error messages at the root node, not the site node. It looks like some idiot changed file permissions so that IIS can no longer read the config file.

    Read the article

  • IIS's SMTP Pickup timing

    - by fatcat1111
    I have IIS's SMTP server set up as a closed relay, and it's working nicely. I also have an application that writes EML files. If the EML files are written to a temporary directory and then moved to the server's Pickup directory, email is sent as expected. However, if I have the application write the EML files directly to the Pickup directory, the email will often fail to send. This seems to be a race condition: the server starts processing the EML file as soon as it detects it in Pickup, even though the application hasn't finished writing it. The result is that the server considers the EML malformed and punts it to Badmail. While I very much appreciate the server's earnestness, it seems that I need to dial it back a bit for this scenario. Does anybody know if the polling frequency of IIS's SMTP server can be configured? I am using IIS 7 on Windows Server 2008 R2. The application that writes the EML files cannot be modified.
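    Since the application itself can't be changed, one workaround is a small mover process, sketched below under the assumption that the application's output directory is at least configurable: point the app at a staging directory on the same volume as Pickup, and move each file into Pickup only once it can be opened exclusively, i.e. once the writer has closed it. File.Move within a volume is a rename, so the SMTP service never sees a half-written file. The paths are hypothetical:

        using System;
        using System.IO;

        class PickupMover
        {
            // Hypothetical paths; adjust to the real staging and Pickup directories.
            const string StagingDir = @"C:\EmlStaging";
            const string PickupDir = @"C:\inetpub\mailroot\Pickup";

            static void Main()
            {
                var watcher = new FileSystemWatcher(StagingDir, "*.eml");
                watcher.Created += (s, e) => TryMove(e.FullPath);
                watcher.EnableRaisingEvents = true;
                Console.ReadLine(); // keep the watcher alive
            }

            static void TryMove(string path)
            {
                while (true)
                {
                    try
                    {
                        // Opening with no sharing succeeds only after the writer
                        // has closed the file, so we never move a partial EML.
                        using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None)) { }
                        File.Move(path, Path.Combine(PickupDir, Path.GetFileName(path)));
                        return;
                    }
                    catch (IOException)
                    {
                        System.Threading.Thread.Sleep(250); // still being written; retry
                    }
                }
            }
        }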

    Read the article

  • Battery backed write cache behavior upon disk change

    - by Halfgaar
    We use 3ware 9650SE SATA-II RAID PCIe controllers with battery-backed write cache. Our spare hardware has the same controller. I was wondering: are these controllers smart enough not to sync the cache when the disks have been changed? For example, if I deploy one of those spare machines by putting in the disks of another machine, and that spare machine still has pending writes, will it be smart enough not to perform those writes on the replaced array?

    Edit: my scenario wasn't really clear, so let me give an example:

    1. server1 goes down because of a power supply failure.
    2. I put the disks in server2 and start it.
    3. I repair server1.
    4. I put the disks from server2 back in server1 (it's not relevant right now that in reality I would probably keep server2 running).

    If server1 doesn't have safeguards, it will write to the array, thinking it's simply powering up again, corrupting it.

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower.

    My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance?

    Thanks.

    Aaron

    Read the article

  • How do you make Windows 7 fully case-sensitive with respect to the filesystem?

    - by trusktr
    I want to make Windows 7 case-sensitive when it reads/writes anything on the hard drive (the C drive, or any other NTFS drive). I found a video via Google that says to change the registry key HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\DontPrettyPath to a value of 1 (source). I also found a Windows support item about modifying the registry key HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel\obcaseinsensitive, which leads me to assume that setting it to 0 will make Windows case-sensitive on NTFS filesystems (source).

    I have a feeling the second solution is the answer, but I'm not sure, and I don't want to try it without being sure. Does anyone know for sure what the correct way is to make Windows 7 case-sensitive when it reads/writes to the C drive (and any other NTFS drive)?
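    For what it's worth, the second key can also be set programmatically. The following is an illustration only: it assumes the obcaseinsensitive approach is the right one, it must run elevated, a reboot is required, and registry edits at this level are risky, so back up first:

        using Microsoft.Win32;

        class EnableCaseSensitivity
        {
            static void Main()
            {
                // Setting obcaseinsensitive to 0 asks the kernel's object manager
                // to treat names case-sensitively; 1 (the default) is insensitive.
                // Requires administrator rights; takes effect after a reboot.
                Registry.SetValue(
                    @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel",
                    "obcaseinsensitive",
                    0,
                    RegistryValueKind.DWord);
            }
        }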

    Read the article

  • Why does writing a file to an NFS share send a COMMIT operation to the NFS server?

    - by Antonis Christofides
    I have a Debian squeeze (2.6.32-5-amd64) machine that is at the same time an NFSv4 server and client (it mounts itself through NFSv4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        192.168.1.75:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase the speed if I add async to /etc/exports. (Writing a single large file to the NFS-mounted directory goes at more than 100 MB/s.)

    I examined the server statistics and I see that whenever a file is written, it is "committed" (this also happens with NFSv3):

        root@debianvboxtest:~# mount -t nfs4 192.168.1.75:/mydir /mnt
        root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations'
        Server nfs v4 operations:
        op0-unused   op1-unused   op2-future   access       close        commit
        0         0% 0         0% 0         0% 10        4% 1         0% 1         0%

        root@debianvboxtest:~# echo 'hello' >/mnt/test1056
        root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations'
        Server nfs v4 operations:
        op0-unused   op1-unused   op2-future   access       close        commit
        0         0% 0         0% 0         0% 11        4% 2         0% 2         0%

    Now in the RFC, I read this:

        The COMMIT operation is similar in operation and semantics to the POSIX fsync(2) system call that synchronizes a file's state with the disk (file data and metadata is flushed to disk or stable storage). COMMIT performs the same operation for a client, flushing any unsynchronized data and metadata on the server to the server's disk or stable storage for the specified file.

    I don't understand why the client commits. I don't think the "echo" shell built-in runs fsync; if echo wrote to a local file and then the machine went down, the file might be lost. In contrast, the NFS client appears to send a COMMIT upon completion of the echo. Why?

    I am reluctant to use the async NFS server option, because it would apparently ignore COMMIT. I feel as if I had a local filesystem and had to choose between syncing every file upon close and ignoring fsync altogether. What have I understood wrong?

    Read the article

  • SQL Server plus small files

    - by user1467163
    I have an MSSQL server with 3 volumes that runs some processes that seem to take way too long. One of these processes reads in a zip file, then writes to a database based on what's in the zip file... for each record. I have 2 volumes in use and am creating the third, so I am trying to plan how to lay things out. The OS has to remain on vol. 1. The TLogs should probably go on the new volume and the MDFs on the existing vol. 2. Do I put the file store on the volume with the MDFs, so the files don't interfere with the TLog writes, or with the TLogs, so they don't interfere with the TLog flush to the MDFs? I know it's best to have more servers / volumes, but I have to make do with what's on hand for now. I appreciate any suggestions.

    Read the article

  • How to take a MySQL replication backup

    - by user53864
    I have a MySQL master-master replication setup with a slave for each master (only one master is used for reads/writes at a time) on Ubuntu server. I'm wondering what would be the best way to schedule backups of the replicated databases with mysqldump. I could not proceed further because of the following questions:

    1. Is scheduling a mysqldump backup on the masters safe for replication?
    2. Is connecting to the masters with GUI applications (Workbench) for database manipulation (reads, writes by developers) safe?

    Any inputs are welcome.

    Read the article

  • Using one disk as a cache for others

    - by HugoRune
    Hi. Given a PC with several hard drives: is it possible to use one fast disk as a giant file cache? I.e., automatically copying frequently accessed data to that one disk, and transparently redirecting reads and writes to that disk, so that the other drives would only have to be accessed occasionally. (Writes would have to be forwarded to the other disks after a while, of course.)

    Advantages:

    - the other drives could be powered down most of the time, reducing power, heat, and noise;
    - the speed of the other drives would not matter much;
    - the cache disk could be solid state.

    How can I set such a system up? Which OSes support these options? Is this possible at all using Windows or Linux?

    Read the article

  • How to have Excel data validation display different data in drop down than is actually validated

    - by Memitim
    How can I provide a user with a drop-down menu in a cell that displays the contents of one column but actually writes the value from a different column to the cell, and validates against the values from that second column? I have a bit of code that very nearly does this (credit: DV0005 from the Contextures site):

        Private Sub Worksheet_Change(ByVal Target As Range)
            On Error GoTo errHandler
            If Target.Cells.Count > 1 Then GoTo exitHandler
            If Target.Column = 10 Then
                If Target.Value = "" Then GoTo exitHandler
                Application.EnableEvents = False
                ' Look up the selected value in the Measures range and write
                ' the value from the adjacent column into the cell instead.
                Target.Value = Worksheets("Measures").Range("B1") _
                    .Offset(Application.WorksheetFunction _
                    .Match(Target.Value, Worksheets("Measures").Range("Measures"), 0) - 1, 1)
            End If

    The drop-down displays the values from one column, for example Column B, but when a value is selected it actually writes the value on the same row from Column C to the cell. However, data validation is actually validating against Column B, so if I manually enter something from Column C in the cell and try to move to another cell, data validation throws an error.

    Read the article
