Search Results

Search found 4215 results on 169 pages for 'andrew price'.


  • When to use Truecrypt, and when not to?

    - by tm77
    I have about 30 unencrypted laptops (this number will most likely grow to 50 or more over the next few years) that I have been tasked to encrypt (entire drive). These machines will be used off-site regularly by my users. They are running Windows 7 and XP (about 50/50), but more Windows 7 every month. I have experience with TrueCrypt and have had no issues; it appears to be THE free solution. My concern with TrueCrypt is that my users will need 2 passwords to log in to their machines. Also, I need to choose between having 1 password for my whole organization, or carefully documenting each machine's password (a management nightmare). In my mind, choosing between a managed and a free encryption solution is primarily based on the NUMBER of machines that will be encrypted and supported. Two questions: From a management standpoint, what is the tipping point in machine count where a managed solution would pay for itself over TrueCrypt? And what are some good third-party solutions? (I will consider BitLocker, but the price to upgrade the Windows 7 licenses is a turn-off.) I would love to hear from some admins with experience supporting encrypted machines in a corporate environment. Many thanks in advance!
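
    On the first question, the tipping point is just break-even arithmetic once real numbers are plugged in. Here is a toy Python sketch of that comparison; every figure in it is a hypothetical placeholder to be replaced with actual license quotes and admin labor costs:

      # All figures are hypothetical placeholders; substitute your real numbers.
      FIXED_MGMT = 1000.0      # one-off cost of a management server/setup
      LICENSE_PER_SEAT = 40.0  # managed product, per machine per year
      ADMIN_HOURLY = 50.0      # cost per admin hour
      HOURS_TC = 2.0           # yearly per-machine password admin with TrueCrypt
      HOURS_MGD = 0.5          # the same overhead with central management

      def truecrypt_cost(n):
          # TrueCrypt itself is free; the cost is the password-management time.
          return n * HOURS_TC * ADMIN_HOURLY

      def managed_cost(n):
          return FIXED_MGMT + n * (LICENSE_PER_SEAT + HOURS_MGD * ADMIN_HOURLY)

      for n in (10, 30, 50, 100):
          print(f"{n:>3} machines: TrueCrypt ${truecrypt_cost(n):,.0f}"
                f" vs managed ${managed_cost(n):,.0f} per year")

    With these made-up figures the managed solution breaks even at around 30 machines; the point is only that the crossover is driven by the fixed management cost versus the per-machine admin time, so plug in your own quotes.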

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T series of microprocessors, first released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. It was, however, less suited to pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance than previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on:

      - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
      - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data (tens of gigabytes), processing it, and writing it back to a file or an Oracle 11gR2 database table. These are CPU-, memory- and IO-bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks, and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

      customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
      1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
      2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
      3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
      4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
      [...]

    Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. the L2ARC): once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline.

    For the database load tests, only the best IO configuration (using the external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing single-thread performance

    Seven different tests were performed on both servers. Since only one thread, and thus only one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache.

      - Read File -> Filter -> Write File: read a file, filter the data, and write the filtered data to a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
      - Read File -> Load Database Table: read a file, insert into a single Oracle table.
      - Average: read a file, compute the average of a numeric column, write the result to a new file.
      - Division & Square Root: read a file, perform a division and a square root on a numeric column, write the resulting data to a new file.
      - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
      - Transform: read a file, transform it, write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
      - Sort: read a file, sort a numeric and an alphanumeric column, write the resulting data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second); Improvement is the % of improvement between the T5140 and the T4-1.

      Test                    T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
      Read/Filter/Write       125             806              645%         100                16
      Read/Load Database      195             1111             570%         64                 11
      Average                 96              557              580%         130                22
      Division & Square Root  161             1054             655%         78                 12
      Oracle DB Dump          164             945              576%         76                 13
      Transform               159             1124             707%         79                 11
      Sort                    251             1336             532%         50                 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion into an Oracle DB is faster, with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
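
    As a concrete illustration of the Read File -> Filter -> Write File test described above, here is a minimal Python sketch of the same shape of work. The actual benchmark ran Talend-generated code; the file names here are hypothetical, and the position of the Cust_Status column is taken from the sample layout shown earlier.

      import csv

      SRC = "customers.csv"            # hypothetical input; the benchmark used one 1 GB file per thread
      DST = "customers_filtered.csv"   # hypothetical output

      def read_filter_write(src, dst):
          """Read semicolon-delimited rows, keep only status 'A', write them out."""
          kept = 0
          with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
              reader = csv.reader(fin, delimiter=";")
              writer = csv.writer(fout, delimiter=";")
              for row in reader:
                  if row[7] == "A":    # Cust_Status is the 8th field in the sample layout
                      writer.writerow(row)
                      kept += 1
          return kept

      if __name__ == "__main__":
          print(read_filter_write(SRC, DST), "lines kept")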

    Read the article

  • 3 Ways to Make Steam Even Faster

    - by Chris Hoffman
    Have you ever noticed how slow Steam's built-in web browser can be? Do you struggle with slow download speeds? Or is Steam just slow in general? These tips will help you speed it up. Steam isn't a game itself, so there are no 3D settings to change to achieve maximum performance. But there are some things you can do to speed it up dramatically.

    Speed Up the Steam Web Browser

    Steam's built-in web browser, used in both the Steam store and in Steam's in-game overlay to provide a web browser you can quickly use within games, can be frustratingly slow on many systems. Rather than the typical speed we've come to expect from Chrome, Firefox, or even Internet Explorer, Steam seems to struggle. When you click a link or go to a new page, there's a noticeable delay before the new page appears, something that doesn't happen in desktop browsers. Many people seem to have made peace with this slowness, accepting that Steam's built-in browser is just bad. However, there's a trick that will eliminate this delay on many systems and make the Steam web browser fast. This problem seems to arise from an incompatibility with the Automatically Detect Proxy Settings option, which is enabled by default on Windows. This is a compatibility option that very few people should actually need, so it's safe to disable it.

    To disable this option, open the Internet Options dialog: press the Windows key to access the Start menu or Start screen, type Internet Options, and click the Internet Options shortcut. Select the Connections tab in the Internet Options window and click the LAN settings button. Uncheck the Automatically detect settings option here, then click OK to save your settings. If you experienced a significant delay every time a web page loaded in Steam's web browser, it should now be gone. In the unlikely event that you encounter some sort of problem with your network connection, you can always re-enable this option.

    Increase Steam's Game Download Speed

    Steam attempts to automatically select the download server nearest to your location. However, it may not always select the ideal download server. Or, in the case of high-traffic events like big seasonal sales and huge game launches, you may benefit from selecting a less congested server. To do this, open Steam's settings by clicking the Steam menu in Steam and selecting Settings. Click over to the Downloads tab and select the closest download server from the Download Region box. You should also ensure that Steam's download bandwidth isn't limited from here. You may want to restart Steam and see if your download speeds improve after changing this setting. In some cases, the closest server might not be the fastest. One a bit farther away could be faster if your local server is more congested, for example. Steam once provided information about content server load, which allowed you to select a regional server that wasn't under high load, but this information no longer seems to be available. Steam still provides a page that shows you the amount of download activity happening in different regions, including statistics about the difference in download speeds in different US states, but this information isn't as useful.

    Accelerate Steam and Your Games

    One way to speed up all your games, and Steam itself, is by getting a solid-state drive and installing Steam to it. Steam allows you to easily move your Steam folder, at C:\Program Files (x86)\Steam by default, to another hard drive. Just move it like you would any other folder. You can then launch the Steam.exe program as if you had never moved Steam's files. Steam also allows you to configure multiple game library folders. This means that you can set up a Steam library folder on a solid-state drive and one on your larger magnetic hard drive. Install your most frequently played games to the solid-state drive for maximum speed and your less frequently played ones to the slower magnetic hard drive to save SSD space. To set up additional library folders, open Steam's Settings window and click the Downloads tab. You'll find the Steam Library Folders option here. Click the Add Library Folder button and create a new game library on another hard drive. When you install a game in Steam, you'll be asked which library folder you want to install it to.

    With the proxy compatibility option disabled, the correct download server chosen, and Steam installed to a fast SSD, it should be a speed demon. There's not much more you can do to speed up Steam, short of upgrading other hardware like your computer's CPU. Image Credit: Andrew Nash on Flickr
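
    As a side note, if you would rather verify the proxy option programmatically than click through the dialogs, the Python sketch below reads the current user's connection settings from the registry. The binary layout of this value is not an official API; treating byte 8 as a flags byte, with 0x08 meaning "automatically detect settings", is an assumption based on commonly documented behavior of this value.

      import winreg

      KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections"
      AUTODETECT = 0x08  # assumed flag bit for "Automatically detect settings"

      with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
          # REG_BINARY blob; byte 8 is commonly described as the flags byte.
          blob, _ = winreg.QueryValueEx(key, "DefaultConnectionSettings")
          if len(blob) > 8 and blob[8] & AUTODETECT:
              print("Automatically detect settings is ON (the slow case)")
          else:
              print("Automatically detect settings is OFF")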

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you'll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB (Calvin Sun, Oracle)
    InnoDB is the default storage engine for Oracle's MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning (Dimitri Kravtchuk, Oracle)
    This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no "silver bullet." But understanding the configuration setting's impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others; the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations (Calvin Sun, Oracle)
    Many top Web properties rely on Oracle's MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database; that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster (Andrew Morgan and John Duncan, Oracle)
    Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning (Inaam Rana, Oracle)
    The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP (Nizameddin Ordulu, Facebook, and Inaam Rana, Oracle)
    Data compression is an important capability of the InnoDB storage engine for Oracle's MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$300 over the on-site fee. Register Now!

    Read the article

  • Building a Media Center PC with Comcast Cable...?

    - by Rob
    Alright, so this might be a stupid question, but I've never been all that much into TV. I currently have Comcast cable. I've just got the 'basic' 2-60 package or whatever; I've just always plugged the cable into the back of my TV. I've never had a cable box. Recently, Comcast has been pulling channels off of my line-up. Most recently, they stole the TV Guide channel from me. I'm told this is part of a push to get customers to switch to their digital line-up. But I'm also told it requires some sort of digital receiver for each TV you've got. I don't want to buy a bunch of these digital receivers and I don't want to pay the monthly rental fee... but I have heard of how awesome media center PCs are and some really cool things they can do. And I've got loads of PC parts sitting around. So, can someone guide me through this a bit? Are there computer video cards or TV tuners that are going to work with Comcast's digital cable? What kind of price range are we looking at?

    Read the article

  • The best LCD monitors for reading text?

    - by Xeoncross
    I have been using a 19" Acer AL1916A B for several years now. While it may have fallen short in other areas, its text was incredibly sharp, which is very important for someone like me who spends all day writing code. My eyes are very finely tuned: I can see refresh rates and even the smallest pixel overflows from anti-aliasing. Unfortunately, it finally died. I then tried a 19" widescreen Acer X193w+ and found that the text was much less sharp. I also tried a 19" widescreen Samsung 920nw and was also disappointed. (By the way, widescreen is a great invention for companies: the same price for less screen!) I am looking for a couple of options for LCDs that hands-down render text ultra sharp and clear. This isn't subjective: an LCD either has sharp text or it doesn't. Anyone with delicate eyes can see the difference and knows what I'm talking about. Please also bear in mind that your vision can adjust to a given screen, rendering your judgment biased if you do not constantly use other monitors as well. If you use Windows with ClearType enabled, please do not reply.

    Read the article

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS, until I found these two tests: http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html and http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html. The two cons against the Drobo for me: loudness and price. What disadvantages does the unRAID solution have compared to the Drobo FS? Does it also have that same ease of use, like swapping drives on the go, simply extending capacity by plugging in new drives, notifying me of drive errors, disk failure protection, dynamic space of "partitions", better/worse effective capacity, etc.? Which is more secure? Am I able to simply replace a bad drive with a new one on unRAID? What happens if my PC fails? Let's say the CPU overheats. Since I have a complete PC which is going to be replaced, I only have to pay for the software to use unRAID. I am going to use my NAS for:

      - music library (how well does it integrate with iTunes?)
      - picture library
      - movie library
      - development (I need to be able to use Time Machine)

    I am going to use this NAS with a MacBook Pro. My current disks:

      - 2x 500 GB
      - 1x 1.5 TB
      - 1x 2 TB

    On a Drobo FS I would have 2.26 TB of space. What would it be on unRAID? Is FreeNAS also an alternative?

    Read the article

  • Will Intel be releasing any more 6-core processors soon?

    - by jasondavis
    I am about to start buying parts every week for as long as it takes me to build the best PC I can. I am looking at the Intel i7-920 processor right now because it is about $250 and it is a quad-core processor for the X58 chipset, I believe. From what I have read so far, Intel is coming out with some 6-core processors soon that will also use the X58 chipset, which would allow me to keep the same motherboard and memory/RAM and upgrade to a 6-core. This sounds really good to me right now. I just read that the first 6-core processor, the Core i7-980X (Extreme Edition), was just released, but it is supposed to be around $1,000, so I will probably just get the i7-920 for now and then upgrade to the 6-core version when the price goes down. The motherboard I am looking at is the GIGABYTE GA-X58A-UD5, which is around $280 at newegg.com. So that is my basic plan so far. I have not purchased any parts yet. I just want to ask if this sounds like a good idea, or if I should wait longer given that I eventually want a 6-core processor. Does anyone know if Intel is planning on releasing any other 6-core processors in addition to the Core i7-980X in the near future? I just want to make sure I am buying the best setup for my money if I am going all out on it. Thanks for any tips/advice.

    Read the article

  • AMD 700, 800 series chipset. I'm lost.

    - by Shiki
    I've been an Intel/NVidia user ever since I started using computers. Intel has really gone up in price, and they won't get cheaper. So I decided to get an AMD. But WHICH one? I mean, this is not a shopping question, but what are the differences? Like: the 880GMA comes with only a single PCI-e slot and it looks like a Chinese replica (no offense), while the 890FX comes with 5 PCI-e slots for quad CrossFire. Also, what's the deal with the 7xx series? I mean, it's the same price. Yet it's older? Or why is it 7xx? Isn't there a single chipset in between, one that's not a knock-off yet durable/fine for long-term usage? What it should support (desktop stuff): an NVidia GPU (Zalman AMP2 GTX 260^2, one card), a Phenom 1090T CPU, and somewhat good audio. Any ideas which chipset I'm searching for? If this sounds too much like a shopping question, feel free to edit. I just want some clarification on these chipsets.

    Read the article

  • Wildcards!

    - by Tim Dexter
    Yes, it's been a while. I'm sorry, mumble, mumble... no excuses. Well, other than it's been, as my son would say, 'hecka busy.' On a brighter note, I see Kan has been posting some cool stuff in my absence, long may he continue! I received a question today asking about using a wildcard in a template, something like:

      <?if:INVOICE = 'MLP*'?>  (where * is the wildcard)

    Well, that particular try does not work, but you can do it without building your own wildcard function. XSL, the underpinning language of the RTF templates, has some useful string functions; you can find them listed here. I used the starts-with function to achieve a simple wildcard scenario, but contains can be used in conjunction with some of the others to build something more sophisticated. Assume I have a list of friends and the amounts of money they owe me... I'm very generous and my interest rates are pretty competitive :0)

      <ROWSET>
        <ROW>
          <NAME>Andy</NAME>
          <AMT>100</AMT>
        </ROW>
        <ROW>
          <NAME>Andrew</NAME>
          <AMT>60</AMT>
        </ROW>
        <ROW>
          <NAME>Aaron</NAME>
          <AMT>50</AMT>
        </ROW>
        <ROW>
          <NAME>Alice</NAME>
          <AMT>40</AMT>
        </ROW>
        <ROW>
          <NAME>Bob</NAME>
          <AMT>10</AMT>
        </ROW>
        <ROW>
          <NAME>Bill</NAME>
          <AMT>100</AMT>
        </ROW>
      </ROWSET>

    Now, listing my friends is easy enough:

      <?for-each:ROW?> <?NAME?> <?AMT?> <?end for-each?>

    But let's say I just want to see all my friends whose names begin with 'A'. To do that I can use an XPATH expression to filter the data and tack it on to the for-each expression. This is more efficient than using an 'if' statement just inside the for-each.

      <?for-each:ROW[starts-with(NAME,'A')]?>

    will find me all the A's. The square braces denote the start of the XPATH expression. starts-with is the function I'm calling, and I'm passing the value I want to check, i.e. NAME, and the string I'm looking for. Just substitute in the characters you are looking for. You can of course use the function in an if statement too:

      <?if:starts-with(NAME,'A')?><?attribute@incontext:color;'red'?><?end if?>

    Notice I removed the square braces. This will highlight text red if the name begins with an 'A'. You can even use the function to do conditional calculations:

      <?sum (AMT[starts-with(../NAME,'A')])?>

    This sums only the amounts where the name begins with an 'A'. Notice the square braces are back; it's a function we want to apply to the AMT field. Also notice that we need to use ../NAME. The AMT and NAME elements are at the same level in the tree, so when we are at the AMT level we need the ../ to go up a level and then come back down to test the NAME value. I have built out the above functions in a sample template here. Huge prizes for the first person to come up with a 'true' wildcard solution, i.e. if NAME like '*im*exter* demand cash now!
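
    A handy way to sanity-check these XPath expressions before dropping them into a template is Python with the third-party lxml library, which evaluates the same XPath 1.0 functions the RTF template engine uses. This is just a test-harness sketch, not anything BI Publisher itself needs:

      from lxml import etree

      # The sample data from the post, with the closing tag restored.
      XML = b"""<ROWSET>
        <ROW><NAME>Andy</NAME><AMT>100</AMT></ROW>
        <ROW><NAME>Andrew</NAME><AMT>60</AMT></ROW>
        <ROW><NAME>Aaron</NAME><AMT>50</AMT></ROW>
        <ROW><NAME>Alice</NAME><AMT>40</AMT></ROW>
        <ROW><NAME>Bob</NAME><AMT>10</AMT></ROW>
        <ROW><NAME>Bill</NAME><AMT>100</AMT></ROW>
      </ROWSET>"""

      root = etree.fromstring(XML)

      # Same predicate as the for-each above: rows whose NAME starts with 'A'.
      rows = root.xpath("ROW[starts-with(NAME,'A')]")
      print([r.findtext("NAME") for r in rows])   # ['Andy', 'Andrew', 'Aaron', 'Alice']

      # The conditional sum: only the amounts where NAME begins with 'A'.
      print(root.xpath("sum(ROW/AMT[starts-with(../NAME,'A')])"))   # 250.0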

    Read the article

  • Recovering a damaged microSDHC

    - by djechelon
    I just bought from eBay a Kingston 32GB microSDHC card that was advertised as defective. The seller said that there could be problems with formatting or with transferring large files. Unfortunately, when I got it, it was a total mess:

      - My Nikon camera doesn't read it at all (OK, maybe it doesn't support 32GB)
      - My Linux laptop doesn't mount it: can't read superblock
      - The same laptop refuses to mkfs.msdos because it fails while writing the reserved sector
      - The same laptop, under Windows, doesn't read or format the card
      - My HTC HD2 mounts the MMC and lets me write to it via USB, but it is unable to open the files just written

    OK, folks, now you would say I should go through a PayPal complaint... it's not that easy. I consciously bought a half-price card that was known to show some defects, and PayPal complaints take time. Obviously, I can't accept that somebody sold me a completely useless computer decoration, so I'll keep that as a last option. My question is: do you know a way, under either Linux or Windows, to thoroughly scan, test and possibly repair memory cards, even if I have to lose some percentage of space because of bad sectors? If I can keep at least half of the card intact, it would certainly be fine. I used to do bad-sector marking with hard disks in the past. I almost forgot:

      MONSTR:/home/djechelon # fsck /dev/mmcblk0p1
      fsck from util-linux-ng 2.17.2
      dosfsck 3.0.9, 31 Jan 2010, FAT32, LFN
      Read 512 bytes at 0:Input/output error
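
    On Linux, badblocks is the usual tool for this kind of surface test (for example, badblocks -sv /dev/mmcblk0 runs a non-destructive read-only scan, and mkfs.msdos accepts -c to check for bad blocks while creating the filesystem). As a rough illustration of what such a scan does, here is a minimal read-only Python sketch; the device path is an assumption, and it must run as root:

      import os

      DEV = "/dev/mmcblk0"      # assumed device path
      CHUNK = 1024 * 1024       # scan in 1 MiB chunks

      def scan(dev):
          """Read the whole device sequentially, recording unreadable regions."""
          bad = []
          fd = os.open(dev, os.O_RDONLY)
          try:
              size = os.lseek(fd, 0, os.SEEK_END)
              pos = 0
              while pos < size:
                  os.lseek(fd, pos, os.SEEK_SET)   # reposition after any failed read
                  try:
                      os.read(fd, min(CHUNK, size - pos))
                  except OSError:
                      bad.append(pos)              # offset of an unreadable region
                  pos += CHUNK
          finally:
              os.close(fd)
          return bad

      if __name__ == "__main__":
          regions = scan(DEV)
          print(len(regions), "unreadable 1 MiB region(s)")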

    Read the article

  • Somewhat powerful server needed for computationally expensive stuff

    - by Dane Larsen
    So here's my problem. My Dad runs a company that does some rather computationally expensive stuff. This is not supercomputer-level stuff, but it does take several hours to run the average job on his Core i7 desktop. He asked me to look into a way to have his customers use the code on an hourly basis, namely via a server. Ideally he'd be able to buy a box for about $1000 and hook it right up to our home connection. Unfortunately, the data that needs to be both sent and received is on the order of several hundred megs. We live in a rural area, and the fastest connection offered is 1.5 Mbit/s download. It's like 0.3 Mbit/s upload. Not workable. What are the options for this kind of thing? Ideally, we'd have about 2GB of RAM, 300-500GB of storage, and a nice dual core, and it has to run some flavor of Linux. Any suggestions? Thanks in advance. EDIT: Also, ideally the price would be under $100 per month.

    Read the article

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now is:

      - keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones I will focus on
      - in iCal, write down events for each day, or for a concrete time (13 to 14 hours); each day I set up the tasks I want to accomplish and allocate them hours
      - use Things (Cultured Code) to keep info about tasks and projects that are not very important and not time-allocated yet (GTD name = someday)
      - use Mail on Mac and create folders for the mails I want to process, named after the different projects
      - save the main info for each project in FreeMind maps

    My system works well at the moment, but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements:

      - must be 100% offline accessible
      - should use as few programs/resources as possible; ideally, just one program should be able to manage all my info
      - I can use the GTD methodology mixed with priorities, and I can allocate each task, converted to an event, on my calendar
      - I can have different daily/weekly etc. views on a calendar to see the "big picture"
      - must run on Mac OS X Leopard
      - price does not matter, I will pay for this

    So, according to your experience, can you recommend me something like this? Thanks

    Read the article

  • Walmart and Fusion Apps

    - by ultan o'broin
    Photograph: Misha Vaughan

    I attended Fusion Apps (yes, I know I am supposed to say "Oracle Fusion Applications", but stuffy old style guides are a turn-off in interwebs conversations) User Experience Advocate (FXA) training in Long Beach, California last week; a suitable location, as ODTUG KSCOPE 11 was kicking off and key players were in the area. As a member of Oracle's Apps-UX team I know the Fusion Apps messaging, natch, and have done some other Fusion Apps go-to-market content work too. For the messaging details themselves, see Lonneke Dikmans' (@lonnekedikmans) great blog, by the way.

    However, I wanted some 'formal' training combined with the opportunity to meet and learn from people already out there delivering those messages. The idea in me reaching out to Misha Vaughan, Apps-UX FXA maven, to get me onto this training was that in addition to my UX knowledge, I could leverage my location in EMEA and hit up customer events more quickly and easily. Those local user groups do like to hear the voice of locals too, you know (so I need to work on that mid-Atlantic accent). I'm looking forward to such opportunities.

    The training was all smashing stuff, just the right level of detail, delivered professionally and with great style and humor. I was especially honored to be paired off for my, er, coaching with Debra Lilley (@debralilley), who shared with everyone all kinds of tips and insights from her experiences of delivering the message and demo. For me, that was the real power of the FXA event: the communal, conversational aspect, the meeting up with people who had done all this for real, the sharing in their experiences, while learning along with other newbies. Sorry, but that all-important social aspect doesn't work so well with remote meetings.

    Katie Candland (Apps-UX) gave us a great tour of the Fusion Apps demo and included some useful presentational tips too (any excuse to buy that iPad). It's clear to me that the Fusion Apps messaging and demos really come alive with real-world examples that local application users will recognize, and I picked up some "yes, that's my job made easier" scene-stealers from Debra and Karen Brownfield too, to add to the great ones already provided. This power of examples shouldn't surprise anyone; they've long been a mainstay of applications user assistance, popular with users. We'll offer customers different types of example topics in the Fusion Apps online help too (stay tuned), and we know from research how important those 3 S's (stories, scenarios, and simulations) are to users when they consume and apply information. Well, we've got the simulation; now it's time for more stories and scenarios.

    If you get a chance to participate in an FXA event (whether you are an Oracle employee or otherwise), I'd encourage it. It's committing your time and energy for sure, but I got real bang for the buck from it for my everyday job too. Listening to the room's feedback on the application demo really brought our internal design work to life, and I picked up on some things that I need to follow up on (like how you alphabetically sort stuff in other languages). User experience is, after all, about users.

    What will I be doing next, and what would I like to see happen? Obviously, I need to develop my story-telling links with the people I met in Long Beach and do some practicing with the materials, and then get out there and deliver them at a suitable location. The demo is what it is right now, and that's a super-rich demo that I know everyone will want to see and ask questions about. Then, as mentioned by attendees at the FXA event, follow up on those translated and localized messages for EMEA (and APAC) that deal with different statutory or reporting requirements of the target markets. Given my background I would say that, wouldn't I? However, language is part of the UX, and international revenue is greater than US-only revenue for Oracle, so yes dear, we all need to get over the fact that enterprise apps users don't all speak, or want to speak, American-English. Most importantly perhaps, I'd like to see the continued development of a strong messaging community between Oracle and partners and customers where we can swap and share those FXA messaging stories and scenarios about Fusion Apps in a conversational way. The more the better, with a combination of online and face-to-face meetings.

    I must also mention the great dinner after the event at Parker's Lighthouse, and the fun myself and Andrew Gilmour (Apps-UX) had at our end of the table talking about just about everything except Fusion Apps with Ronald Van Luttikhuizen and Ben Prusinski (who now understands the difference between Cork and Dublin people. I hope). Thanks to all the Apps-UXers who helped bring the FXA training to town, and to Debra and all the others, whom I am too jetlagged to mention right now, who were instrumental in making it happen for me. Here's to the next one.

    And the Walmart angle? That was me doing my Robert Scoble (ScO'bilizer?)-style guerilla smart phone research in Walmart in Long Beach, before the FXA event. It's all about stories for me. You can read more about it on the appslab blog (see the comments).

    Read the article

  • Building a PC, advice on SSD/Hybrid Hard Drives

    - by Jamie Hartnoll
    I am looking at building a new PC. It's mainly for office (graphics-heavy) use and programming, and I'm looking for good performance when opening and closing programs and files, as well as a fast boot. I plan to have 3 primary hard drives:

      1. Windows 7
      2. Programs (Photoshop etc.)
      3. Current files

    (There'll also be a large-capacity backup drive, but this will be the Seagate drive I already have.) So, my question is: looking at standard "old-fashioned" hard drives and SSDs, obviously there's a massive price difference. I have been looking at drives like this:

      http://www.ebuyer.com/268693-corsair-120gb-force-3-ssd-cssd-f120gb3-bk-cssd-f120gb3-bk

    and this:

      http://www.ebuyer.com/321969-momentus-xt-750gb-sata-2-5in-7200rpm-hybrid-8gb-ssd-in-st750lx003

    Having no experience of using either, I don't know which is the most efficient thing to go for. Clearly the SSD will have better performance, but if, for example, I had an SSD for Windows (say about 100 GB), that would clearly give me the boot speed I want. Then I guess my real questions are:

      1. If I were to buy one more SSD, would it give the greatest improvement in overall performance if used to store programs, or currently used files?
      2. Given that the OS is on an SSD, should I not bother with the 3 drives and instead partition that hybrid drive to store programs and currently used files on it?

    Obviously, option two is cheaper and option one could cause me storage issues, but that's when I can dump files I am not currently using onto another drive. Anyway, I am open to suggestions... so what do you suggest?!

    Read the article

  • Announcing Solaris Technical Track at NLUUG Spring Conference on Operating Systems

    - by user9135656
    The Netherlands Unix Users Group (NLUUG) is hosting a full-day technical Solaris track during its spring 2012 conference. The official announcement page, including registration information, can be found at the conference page.

    This year, the NLUUG spring conference focuses on the base of every computing platform: the Operating System. Hot topics like Cloud Computing and Virtualization; the massive adoption of mobile devices, which have their special needs in the OS they run but at the same time put the challenge of massive scalability onto the internet; the rise of multi-core and multi-threaded chips... all these developments keep the Operating System a very interesting area where all kinds of innovations have taken and are taking place. The conference will focus specifically on: Linux, BSD Unix, AIX, Windows and Solaris.

    The keynote speech will be delivered by John 'maddog' Hall, infamous promoter and supporter of UNIX-based Operating Systems. He will talk the audience through several decades of Operating System developments, and share many stories untold so far. To make the conference even more interesting, a variety of talks is offered in 5 parallel tracks, covering new developments in, and also collaboration between, Linux, the BSDs, AIX, Solaris and Windows.

    The full-day Solaris technical track covers the innovations delivered in Oracle Solaris 11. Deeply technically-skilled presenters will talk on a variety of topics. Each topic will first be introduced at a basic level, enabling visitors to attend the presentations individually. Attending the full day will give the audience a comprehensive overview, as well as a more in-depth understanding of the most important new features in Solaris 11.

    NLUUG Spring Conference details:

      Date : April 11, 2012
      Start: 09:15 (doors open: 8:30)
      End  : 17:00 (drinks and snacks served afterwards)

      Venue:
        Nieuwegein Business Center
        Blokhoeve 1
        3438 LC Nieuwegein
        The Netherlands
        Tel  : +31 (0)30 - 602 69 00
        Fax  : +31 (0)30 - 602 69 01
        Email: [email protected]
        Route: description (PDF, Dutch only)

    Conference abstracts and speaker info can be found here. Agenda for the Solaris track (talks are 45 minutes each and will be in English unless marked with 'NL'):

      1. Insights to Solaris 11 - Joerg Moellenkamp, Solaris Technical Specialist, Oracle Germany
      2. Lifecycle management with Oracle Solaris 11 - Detlef Drewanz, Solaris Technical Specialist, Oracle Germany
      3. Solaris 11 Networking - Crossbow Project - Andrew Gabriel, Solaris Technical Specialist, Oracle UK
      4. ZFS: Data Integrity and Security - Darren Moffat, Senior Principal Engineer, Solaris Engineering, Oracle UK
      5. Solaris 11 Zones and Immutable Zones (NL) - Casper Dik, Senior Staff Engineer, Software Platforms, Oracle NL
      6. Experiencing Solaris 11 (NL) - Patrick Ale, UNIX Technical Specialist, UPC Broadband, NL

    There will be a "Solaris Meeting point" during the conference where people can meet up, chat with the speakers and with fellow Solaris enthusiasts, and where live demos or other hands-on experiences can be shared. The official announcement page, including registration information, can be found at the conference page on the NLUUG website. This site also has a complete list of all abstracts for all talks. Please register on the NLUUG website.

    Read the article

  • How to schedule automatic (daily) snapshots of AWS EC2 Windows Instance?

    - by Stanley
    I have some Windows servers hosted on Amazon EC2. Some run Windows Server 2003 and others run Windows Server 2008. These are EBS-backed instances, and most of them also have some additional EBS volumes attached. We want to schedule a daily snapshot of the Windows machines (and also the attached EBS volumes) to S3 so that we have daily backups available. One would think that this is a very common requirement and would be made available via the AWS Management Console, but alas, it is not. What approaches are available? How do I schedule daily snapshots on our Windows servers? There are several scripting examples available online for Linux, but not so much for Windows. I have had a look at http://sehmer.blogspot.com/2011/04/amazon-ec2-daily-snapshot-script-for.html as well as https://github.com/ronmichael/aws-snapshot-scheduler. Has anyone used one of these approaches, and does it work? I have also considered a service like Skeddly, which seems inexpensive at first glance, but when you look at using it for several servers the price soon escalates to the point where it seems a better option to create your own solution, which you can then apply to new servers in the future. With Skeddly we'd pay for each server. How do we schedule daily snapshots of our Windows instances?
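
    For the roll-your-own route, the snapshot step itself is only a couple of API calls. Below is a minimal sketch using Python and the current boto3 library, meant to be run daily from Windows Task Scheduler or cron; the region, instance ID and description format are placeholders, and it assumes IAM credentials are already configured on the machine running it:

      import datetime
      import boto3

      REGION = "us-east-1"                  # placeholder region
      INSTANCE_ID = "i-0123456789abcdef0"   # placeholder instance ID

      def snapshot_instance_volumes(instance_id):
          """Create a snapshot of every EBS volume attached to one instance."""
          ec2 = boto3.client("ec2", region_name=REGION)
          stamp = datetime.date.today().isoformat()
          volumes = ec2.describe_volumes(
              Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
          )["Volumes"]
          for vol in volumes:
              snap = ec2.create_snapshot(
                  VolumeId=vol["VolumeId"],
                  Description=f"daily-{instance_id}-{vol['VolumeId']}-{stamp}",
              )
              print("created", snap["SnapshotId"])

      if __name__ == "__main__":
          snapshot_instance_volumes(INSTANCE_ID)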

    Read the article

  • Looking for "bitmap-vector" image editor

    - by Borek
    I used to use PhotoImpact, which is no longer developed, so I'm looking for a replacement. What made PhotoImpact great for me was the ability to work in both bitmap and vector modes. What I mean by that: I could take an image or screenshot and easily add arrows, text captions or shapes to it. These shapes were vector objects, so I could come back to them later and amend their properties easily. Software I know of:

      - Paint.NET is purely bitmap, so please don't recommend it; layers are not enough for my needs.
      - Drawing tools in MS Office work pretty much the way I'd like: you can paste an image and then add vector objects on top of it. It just doesn't feel right to have the full-fidelity original images stored as .docx or .pptx (I don't fully trust Word/PowerPoint not to compress the image).
      - I'm not sure about GIMP, but if it's just a "better Paint.NET" (i.e., layers but no vector objects), I'm not interested.
      - Photoshop is out of the question purely because of its price tag.
      - Corel killed PhotoImpact because they already had a competing product (Paint Shop Pro), but AFAIK it lacks vector features.

    Any tips for PhotoImpact alternatives would be very welcome.

    Read the article

  • Using a Level 2 switch as a core switch

    - by imtech
    I have a small user base of about 20 people on at a time, spiking up to about 80 people during peak times. Most people (80+%) are connected over our Aruba managed wireless system. We have a Windows domain. We have 3 24-port switches, all connecting back to a central 48-port switch where additional access ports, the firewall, servers, and the wireless controller all centrally connect. It's a flat network with dumb switches. I'm in the process of upgrading our infrastructure. Cisco pricing for switches is pretty high for us, so I've been looking at HP ProCurves, which seem to be within our budget range. I want to eventually make use of 802.1X, SNMP, QoS for possible VoIP upgrades, VLANs to separate a guest VLAN from authenticated users, and other more advanced features. PoE would be nice, but that's probably too expensive for us. I was thinking of having our core switch be a ProCurve 2610 and the rest of the switches that centrally connect to it be ProCurve 2510s. A true, full-blown layer 3 switch is way out of our price range, but a 2610 seems to be good enough for us. The 2610 does static routing, which ought to be good enough for us, but I'm in unfamiliar territory, so I'm looking for any gotchas. Also, should all the switches be 2610s or just the core switch? Do I even need the 2610; can I just go with all 2510s? I'm new to VLANs as well, so I'm not sure what it is I need, but I would like an affordable infrastructure that won't need replacing 2-3 years down the line because I chose a product that was lacking.

    Read the article

  • Netgear FVS336G: appropriate solution for today's small businesses?

    - by bwerks
    Hey all, I've been looking into routers to facilitate a VPN solution for a small business. While the Netgear FVS336G looks good on paper, it appears to have some fairly crippling setbacks that drag down what appears to be some great hardware. First off, the unit has been around for a couple of years now, from perhaps before 64-bit operating systems were as common as they are today, and complaints are everywhere claiming that SSL or IPsec (or both) VPN connections will not work with 64-bit operating systems. However, most of these claims mention only Vista, which makes me think that these problems could potentially have been solved since then. Unfortunately, Netgear's support forums seem to be incredibly private, and policed by some troll named jmizuguchi who just closes down public posts in order to marshal them into the private ones. Danger, Will Robinson. Apparently their firmware upgrade process is a nightmare too, but that's beside the point. My question is this: has anyone configured a Netgear FVS336G to operate in a Server 2008 (or R2)/Windows 7 64-bit network? If so, is it possible to use the Microsoft VPN client, or are third-party clients still required? If this thing has just failed the test of time, is there a feature-comparable unit that I've missed, anywhere near the same price range? Thanks!

    Read the article

  • HP Proliant DL380 G4 - Can this server still perform in 2011?

    - by BSchriver
    Can the HP ProLiant DL380 G4 series server still perform at a high level in the 2011 IT world? This may sound like a weird question, but we are a very small company whose primary business is NOT IT related, so my IT dollars have to stretch a long way. I am in need of a good web and database server. The load and demand for a while will be fairly low, so I am not looking for (nor do I have the money for) a brand new HP DL380 G7 series box for $6K. While searching around today I found a company in ATL that buys servers off business leases and then strips them down to parts. They clean, check and test each part, and then custom "rebuild" the server based on whatever specs you request. The interesting thing is they also provide a 3-year warranty on all the servers they sell. I am contemplating buying two of the following:

      HP ProLiant DL380 G4
      - Dual (2) Intel Xeon 3.6 GHz 800 MHz 1 MB cache processors
      - 8 GB PC3200R ECC memory
      - 6 x 73 GB U320 15K rpm SCSI drives
      - Smart Array 6i card
      - Dual power supplies
      - Plus the usual CD-ROM, dual NICs, etc.

    All this for $750 each, or $1500 for two pretty nicely equipped servers. The price then jumps up on the next model up, the G5 series: it goes from $750 to around $2000 for a comparable server. I just do not have $4000 to buy two servers right now. So back to my original question: if I load Windows 2008 R2 Server and IIS 7 on one of the machines, and Windows 2008 R2 Server and MS SQL 2008 R2 Server on the other machine, what kind of performance might I expect to see from these machines? The fact is, this series is now 3 versions behind the G7s, and this series of server was built when Windows 2000 Server was the dominant OS and Windows 2003 Server was just coming out. If you are running Windows 2008 R2 Server on a G4 with similar or lesser specs, I would love to hear what your performance is like.

    Read the article

  • Using my old PC as a web/file server?

    - by Garrett
    I have an old desktop computer that I've been trying to sell for AGES. I guess nobody is looking for computers, because it was advertised at a dirt cheap price on craigslist, local papers, etc. Anyway, I was wondering if it would be worth it to set it up as a home file server, a web dev server (I have a web host for actual production use), and maybe a host for a few server applications (e.g. Ventrilo). The computer is actually an old Dell that I cannibalized after the motherboard was destroyed by lightning, so it has fairly new parts in it. The specs are:

      - P4 3.4 GHz w/ HT and Arctic Cooling Freezer 7
      - 3 GB DDR2 533 RAM
      - 80 GB HDD (I will upgrade the hard drive if it's even worth using as a server)
      - basic DVD-ROM
      - 430 W Thermaltake PSU (it might be important to note that it is only 60% efficient)
      - ATI Radeon X600 256 MB
      - Antec 300 case

    It's not a really beefy machine; I just can't see giving it away or putting it in the corner to just collect dust. I have Windows Server 2008 R2 Standard, and I am confident in my skills in operating most Linux operating systems. I'd also be using it to tinker with when I learn new things in my server admin classes (I'm finishing my 2nd year in college at the moment, so I'm still learning). Also, my house is quite old and the electrical wiring is pretty poor (it MIGHT be up to code; then again, where I live most people don't even know what regulations are, let alone how to spell the word...). Would it be safe to leave it running all day, and is it going to run up my electric bill because of the PSU efficiency? I only have 5 Mbit cable internet, but I won't be running very bandwidth-intensive services on it, so that should be OK. I should elaborate on why I am concerned about the power: the circuits should be fine, but I'm more concerned about fire hazard. What is the likelihood that the server could cause an electrical fire? Again, thank you all for the feedback!

    Read the article

  • MacBook Air i5 4 GB RAM shows screen tearing when scrolling in browsers

    - by Sandro Dzneladze
    I see screen tearing pretty often while scrolling webpages up and down, and it has been like this from day 1. But now I have purchased an external monitor which is huge (23 inches compared to the Mac's 11), and the effect is more visible. It is driving me nuts and giving me headaches. I wonder if you see the same thing. I have been reading a lot about this problem, and it seems to be present on MacBook Pros as well. Can someone confirm I'm not alone? In other words, I'm deciding whether to go through warranty repair, or whether that makes no sense because all of these machines exhibit the same behavior. It is a shame to see this beautiful machine with an insane price tag lagging when browsing the web, when the processor is just sitting there idle at 4-12%! There is an example on Wikipedia that more or less describes what happens when I scroll up and down. The effect is not so pronounced, and it clears when I stop scrolling, but it is sure annoying the hell out of me. Firefox tears like hell; Chrome less; Safari exhibits jagged scrolling but less tearing compared to Chrome and Firefox. With synthetic benchmarks I don't see any problems with the hardware, but that is not particularly revealing.

    Read the article

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, along with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s, and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port actually can reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately, this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a little-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.

    Read the article

  • iSCSI performance questions

    - by RyanLambert
    Hi everyone, apologies in advance for the long-winded post... I'm attempting to troubleshoot some iSCSI sluggishness on a brand new vSphere deployment (still in test). The layout is as such: 3 vSphere hosts, each with 2x 10 GB NICs plugged into a pair of Nexus 5020s, with a 10-gig back-to-back link between them. The NICs are port-channeled in an active/active redundant fashion (using vPC-MAC pinning, for those of you familiar with the N1KV). Both NICs carry service console, vMotion, iSCSI, and guest traffic. iSCSI is on a single subnet/single VLAN that is not routed through our IP network (strictly layer 2). Had this been a 1-gig deployment, we probably would have split the iSCSI traffic off onto separate NICs, but the price per port gets rather ridiculous when you start throwing 4+ NICs at a server in a 10-gigabit infrastructure, and I'm not really convinced it's necessary. Open to dialogue/tech facts re: this, though. At this point even a single VM guest will boot slowly from iSCSI storage (an EMC CX4 on the same Nexus 5020 10-gig switches), and restores of VMs from iSCSI take about twice as long as we'd expect them to. Our server folks mentioned that if we split the iSCSI off onto its own NIC, performance seems significantly better. From a network perspective, I've run through the variables I can think of (port configuration errors, MTU problems, congestion, etc.) and I'm coming up dry. There really is no other traffic on these hosts other than the very specific test being performed at the time. An important thing to note is that guest traffic works just fine... it seems storage is the only thing affected by whatever gremlin exists. Concluding that we're not 'overutilizing' the network infrastructure, since we're doing hardly anything, I'm just looking for some helpful tips/ideas we can use to resolve this... preferably without hurling in extra 10-gig NICs that are going to sit at around 10% utilization while we've got 70+% left on our others.

    Read the article
