Search Results

Search found 16587 results on 664 pages for 'virtual hardware'.


  • Printer on a different network (IP range), can I print to it?

    - by John
    Here's my situation: the client PC starts out on the same network as the printer (192.168.1.X). That's fine and printing works, however our clients are required to connect to the VPN using the installed Cisco client - this creates a virtual adapter, and the PC is now connected to a 10.0.0.X IP range and can no longer see the printer, hence my problem! If I do an IPCONFIG /ALL the PC still has the 192.168.1.X address, but I can't ping anything on that range. Is it possible to use both IP ranges at the same time?

    Read the article

  • VLAN help on ESX 4/vSphere

    - by user49032
    I set up a new VLAN with ID 153 in vSphere for my ESX 4 server. The VLAN is configured for virtual machines, and I added a new NIC on VLAN 153 to the VM I want to be able to access. Yet I am unable to ping the VLAN's .1 IP that is configured on our Cisco 3750. The IP is set up correctly on the Cisco 3750, because I'm able to ping the interface IP from other machines on the network. I'm guessing there must be an issue with the cabling. Any ideas?

    Read the article

  • Hosting ESXi (free edition) [closed]

    - by Peter Adss
    We currently have one physical server running the free version of VMware ESXi that virtualizes a Win SBS 2003 server and a Citrix server. We need to colocate the server and are investigating our options. Are there places that will host our virtual servers and save us the expense of shipping the physical server out for colocation? In my mind we'd copy the VMs to disk and ship them out. Does the fact that we're using the free version of ESXi create a barrier to this idea? Thanks for the help, I realize this is a stupid question.

    Read the article

  • My server hostname doesn't work? [on hold]

    - by xSpartanCx
    I've got a Raspberry Pi running the Raspbian server edition. It's a modified Debian that runs well on the Pi. My problem is that the only way I can SSH into it with PuTTY is through the static IP. My router doesn't recognize the hostname; it shows the MAC address as the name. This causes the Pi not to show my website online (I think). The only way I've gotten it to work is using my other Linux server to forward requests using virtual hosts, and that has to use the IP address, too. However, now that I have my other server off, the website doesn't work and I can't SSH in (or find it anywhere on the network) using the hostname.

    Read the article

  • The volume "filesystem root" has only 0 bytes disk space remaining?

    - by radek
    I installed 11.10 about two weeks ago and ran into some strange trouble recently. The installation was on a brand new laptop with a clean 128GB SSD. I opted for encrypting the home directory; apart from that I accepted the defaults during installation. There is no other OS on my laptop. I had circa 40GB in use when (for the third time) I got to see this very unpleasant window. Twice the situation was pretty bad and the whole system slowed down considerably. After reboot I could not log in to the graphical interface (with an error message informing me about insufficient space) and had to remove some files from the command line first. The third time I still managed to quickly delete some files and it helped. My laptop is mainly a work environment: no torrents, no games, just two movies. The only media filling space are ~20GB of pictures and a bunch of PDFs. Working mostly on PostgreSQL & PostGIS, GeoServer and QGIS recently. Although I had lots of opportunities to test and practice my backups, I would be extremely grateful if somebody could point me to any potential solutions to this problem. My laptop was bought just before I installed Ubuntu, and it came without an OS. Could that be a hardware issue? Or is the encrypted home causing me headaches? Thanks for the help! Update: As suggested by @maniat1k, here is the current output of fdisk -l:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            1   312581807   156290903+  ee  GPT

    Read the article

  • SSH Public Key - No supported authentication methods available (server sent public key)

    - by F21
    I have a 12.10 server set up in a virtual machine with its network set to bridged (essentially it will be seen as a computer connected to my switch). I installed opensshd via apt-get and was able to connect to the server using PuTTY with my username and password. I then set about trying to get it to use public/private key authentication. I did the following: Generated the keys using PuTTYgen. Moved the public key to /etc/ssh/myusername/authorized_keys (I am using encrypted home directories). Set up sshd_config like so:
        PubkeyAuthentication yes
        AuthorizedKeysFile /etc/ssh/%u/authorized_keys
        StrictModes no
        PasswordAuthentication no
        UsePAM yes
    When I connect using PuTTY or WinSCP, I get an error saying No supported authentication methods available (server sent public key). If I run sshd in debug mode, I see: PAM: initializing for "username" PAM: setting PAM_RHOST to "192.168.1.7" PAM: setting PAM_TTY to "ssh" userauth-request for user username service ssh-connection method publickey [preauth] attempt 1 failures 0 [preauth] test whether pkalg/pkblob are acceptable [preauth] Checking blacklist file /usr/share/ssh/blacklist.RSA-1023 Checking blacklist file /etc/ssh/blacklist.RSA-1023 temporarily_use_uid: 1000/1000 (e=0/0) trying public key file /etc/ssh/username/authorized_keys fd4 clearing O_NONBLOCK restore_uid: 0/0 Failed publickey for username from 192.168.1.7 port 14343 ssh2 Received disconnect from 192.168.1.7: 14: No supported authentication methods available [preauth] do_cleanup [preauth] monitor_read_log: child log fd closed do_cleanup PAM: cleanup
    Why is this happening and how can I fix this?

    Read the article

  • Error while installing vmware tools v8.8.2 in Ubuntu 12.04 beta

    - by Dipen Patel
    I just upgraded to Ubuntu 12.04 from 11.10 using Update Manager. I use it as a virtual machine on VMware Player 4.xx. As usual I installed VMware Tools to enable full screen mode and shared folder functionality. But while installing I got an error while building the modules for the shared folder and fast networking utilities for VMware Tools. The error is:
        ==============================================
        /tmp/vmware-root/modules/vmhgfs-only/fsutil.c: In function 'HgfsChangeFileAttributes':
        /tmp/vmware-root/modules/vmhgfs-only/fsutil.c:610:4: error: assignment of read-only member 'i_nlink'
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/fsutil.o] Error 1
        make[2]: *** Waiting for unfinished jobs....
        /tmp/vmware-root/modules/vmhgfs-only/file.c:128:4: warning: initialization from incompatible pointer type [enabled by default]
        /tmp/vmware-root/modules/vmhgfs-only/file.c:128:4: warning: (near initialization for 'HgfsFileFileOperations.fsync') [enabled by default]
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:53:30: error: expected ')' before numeric constant
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:56:25: error: expected ')' before 'int'
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:59:33: error: expected ')' before 'int'
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/tcp.o] Error 1
        make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-22-generic'
        make: *** [vmhgfs.ko] Error 2
        make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only'
    The filesystem driver (vmhgfs module) is used only for the shared folder feature. The rest of the software provided by VMware Tools is designed to work independently of this feature. Let me know if anyone has encountered and solved this problem. Regards, Dipen Patel

    Read the article

  • Sun Storage 2500-M2 Array and Sun Fire X4470 M2 Server

    - by nospam(at)example.com (Joerg Moellenkamp)
    There is some new hardware in the Oracle portfolio. The first is the Sun Fire X4470 M2 Server. There was a lot of talk about the system before because of benchmark results, but now it's finally announced. Two or four Intel Xeon E7-4800 processors. Up to 1 TB of memory, as the system provides 64 DIMM slots for 16 GB DDR DIMMs. The memory is placed on riser cards right behind the fans of the chassis. Up to 6 internal drives. All in a 3 RU package. The other announcement was the Sun Storage 2500 M2, announced yesterday: from 5 to 48 drives (the latter number with three expansion trays) for up to 28.8 TB of storage. The array is SAS based internally. You can put 300 GB and 600 GB drives in it. The 2540-M2 provides 4 (optionally 8) FC ports at up to 8 Gbit/s. The 2530-M2 has 4 SAS2 ports at up to 6 Gbit/s. It has 2 integrated controllers providing 2 GB of cache protected by a power backup for 72 hours. The controllers enable the arrays to deliver RAID levels 0, 1, 10, 3, 5 and 6 (P+Q).

    Read the article

  • Need a solution to store images (1 billion, 1,000,000,000) which users will upload to a website via PHP or JavaScript upload [on hold]

    - by wish_you_all_peace
    I need a solution to store images (1 billion) which users will upload to a website via PHP or JavaScript (the website will have 1 billion page views a month using Linux Debian distros), assuming a maximum of 20 photos per user (10 thumbnails of 90px by 90px and 10 large, script-resized images with a maximum width or height of 500px depending on the shape of the image: square, rectangle, horizontal, vertical etc.). Assume this to be a LEMP-stack (Linux Nginx MySQL PHP) social-media or social-matchmaking type application whose content will be text and images. Since everyone knows that storing tons of images (user-uploaded images in this case) inside a single directory or NFS etc. is bad, please explain all the details of the architecture and configuration of the entire storage setup, for storing 1 billion images by whatever method you recommend (no third-party cloud storage like S3 etc.; it has to be within a private data center using our own hardware and resources). The solution has to include both the storage itself and a way of organizing the images uploaded by users. How will we organize the users' images if a single user will not have more than 20 images (10 thumbs and 10 large with either width or height 500px)? Please consider that this has to be organized in a structured way so we can fetch a single user's images via PHP/JavaScript or an API programmatically through some kind of user unique identifier(s).
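    A minimal sketch of one common approach - hash-sharded directories derived from the user's unique ID - is shown below in Python; the root path, shard depth and file naming are assumptions for illustration, not requirements from the question:

        import hashlib
        import os

        STORAGE_ROOT = "/srv/images"        # hypothetical mount point, not from the question
        VALID_KINDS = ("thumb", "large")    # 10 thumbnails + 10 large images per user

        def image_path(user_id, kind, index):
            # Reject anything outside the stated per-user limit of 20 images.
            if kind not in VALID_KINDS or not 0 <= index < 10:
                raise ValueError("each user has at most 10 thumbs and 10 large images")
            # Two 2-character shard levels keep any single directory small even at
            # roughly 50 million users (1 billion images / 20 images per user).
            digest = hashlib.md5(str(user_id).encode("utf-8")).hexdigest()
            return os.path.join(STORAGE_ROOT, digest[:2], digest[2:4],
                                str(user_id), "%s_%d.jpg" % (kind, index))

        print(image_path(123456789, "thumb", 3))
        # -> /srv/images/25/f9/123456789/thumb_3.jpg (the shard depends on the hash)

    Because the path is a pure function of the user ID, any PHP/JavaScript front end or API layer can recompute it without a database lookup.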

    Read the article

  • Ubuntu 12.04 on Amazon EC2: /dev/xvda1 will be checked for errors at next reboot?

    - by cwd
    I'm running the latest Ubuntu 12.04 AMI (ami-a29943cb) from Canonical on Amazon EC2, and quite often when I log in I get the message: *** /dev/xvda1 will be checked for errors at next reboot *** I have read a bunch of documentation on this and seem to understand that every so many reboots (around 37; see Mount count / Maximum mount count below) Ubuntu wants to check a disk for errors. I can see that by using dumpe2fs -h /dev/xvda1 (reference) to get information such as:
        Last mounted on: /
        Filesystem UUID: 1ad27d06-4ecf-493d-bb19-4710c3caf924
        Filesystem magic number: 0xEF53
        Filesystem revision #: 1 (dynamic)
        Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags: signed_directory_hash
        Default mount options: (none)
        Filesystem state: clean
        Errors behavior: Continue
        Filesystem OS type: Linux
        Inode count: 524288
        Block count: 2097152
        Reserved block count: 104857
        Free blocks: 1778055
        Free inodes: 482659
        First block: 0
        Block size: 4096
        Fragment size: 4096
        Reserved GDT blocks: 511
        Blocks per group: 32768
        Fragments per group: 32768
        Inodes per group: 8192
        Inode blocks per group: 512
        Flex block group size: 16
        Filesystem created: Tue Apr 24 03:07:48 2012
        Last mount time: Thu Nov 8 03:17:58 2012
        Last write time: Tue Apr 24 03:08:52 2012
        Mount count: 3
        Maximum mount count: 37
        Last checked: Tue Apr 24 03:07:48 2012
        Check interval: 15552000 (6 months)
        Next check after: Sun Oct 21 03:07:48 2012
        Lifetime writes: 2454 MB
        Reserved blocks uid: 0 (user root)
        Reserved blocks gid: 0 (group root)
        First inode: 11
        Inode size: 256
        Required extra isize: 28
        Desired extra isize: 28
        Journal inode: 8
        Default directory hash: half_md4
        Directory Hash Seed: 0a25e04c-6169-4d68-bfa6-a1acd8e39632
        Journal backup: inode blocks
        Journal features: journal_incompat_revoke
        Journal size: 128M
        Journal length: 32768
        Journal sequence: 0x0000158b
        Journal start: 1
    I've tried these things to get rid of the message, and usually badblocks is what does it for me:
        Run this command and reboot: sudo touch /forcefsck
        Run badblocks to check the disk: badblocks /dev/sda1
        Edit /etc/fstab and change the last "0", which is the fs_passno column, accordingly and then reboot: the root filesystem should be specified with a fs_passno of 1, and other filesystems should have a fs_passno of 2.
    I don't understand: if this is a virtual drive, shouldn't it be less prone to errors? Was the image created with one of the flags set? If not, what is triggering it? Why is fs_passno set to 0 on Amazon EC2 Ubuntu images? This is not the first one that is like this.
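    A small Python sketch of how the mount-count fields above can be checked automatically (the device name and field handling are assumptions; dumpe2fs normally needs root):

        import re
        import subprocess

        DEVICE = "/dev/xvda1"   # the device from the question

        def field(output, name):
            # dumpe2fs -h prints "Name:   value" lines like the ones quoted above.
            match = re.search(r"^" + re.escape(name) + r":\s*(.+)$", output, re.MULTILINE)
            return match.group(1).strip() if match else ""

        output = subprocess.run(["dumpe2fs", "-h", DEVICE],
                                capture_output=True, text=True, check=True).stdout
        mounts = int(field(output, "Mount count"))
        maximum = int(field(output, "Maximum mount count"))

        print("%s mounted %d of %d times before a forced check" % (DEVICE, mounts, maximum))
        if maximum > 0 and mounts >= maximum:
            print("A filesystem check is due at the next reboot.")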

    Read the article

  • Consolidation Strategy References

    - by BuckWoody
    I have a presentation that I give on SQL Server Consolidation Strategies, and in that presentation I talk about a few links that are useful. Here are some that I've found – feel free to comment on more, or if these links go stale:
        Consolidation using SQL Server: http://msdn.microsoft.com/en-us/library/ee692366.aspx
        SQL Server Consolidation Guidance: http://msdn.microsoft.com/en-us/library/ee819082.aspx
        More references for SQL Server and Hyper-V: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Virtualization-with-SQL-Server.aspx
        Quick overview of Virtual Server licensing implications: http://www.microsoft.com/uk/licensing/morethan250/learn/virtualisation.mspx
        SQL Server and Hyper-V best practices: http://sqlcat.com/whitepapers/archive/2008/10/03/running-sql-server-2008-in-a-hyper-v-environment-best-practices-and-performance-recommendations.aspx
        High-Availability and Hyper-V: http://technet.microsoft.com/en-us/magazine/2008.10.higha.aspx
        Virtualization Calculator: http://www.microsoft.com/Windowsserver2008/en/us/hyperv-calculators.aspx
        May not be current, but here's a whitepaper from VMWare for SQL Server: http://www.vmware.com/files/pdf/SQLServerWorkloads.pdf
        More information on SQL Server and VMWare: http://blogs.msdn.com/cindygross/archive/2009/10/23/considerations-for-installing-sql-server-on-vmware.aspx
        Server Virtualization Validation Program: http://www.windowsservercatalog.com/svvp.aspx?svvppage=svvp.htm

    Read the article

  • SQLAuthority News – Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 – Microsoft Whitepaper

    - by pinaldave
    I recently presented a session on Statistics and Best Practices at Virtual Tech Days on Nov 22, 2010. The session was very popular and I got many questions right after it. The number one question I received was where everybody can get further information. I am very happy that my session created some curiosity about one of the most important features of SQL Server. Statistics are the heart of SQL Server. Microsoft has published a white paper on how statistics are used by the Query Optimizer. Here is the abstract of that white paper from Microsoft. Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 Writers: Eric N. Hanson and Yavor Angelov Microsoft SQL Server 2008 collects statistical information about indexes and column data stored in the database. These statistics are used by the SQL Server query optimizer to choose the most efficient plan for retrieving or updating data. This paper describes what data is collected, where it is stored, and which commands create, update, and delete statistics. By default, SQL Server 2008 also creates and updates statistics automatically, when such an operation is considered to be useful. This paper also outlines how these defaults can be changed on different levels (column, table, and database). In addition, it presents how certain query language features, such as Transact-SQL variables, interact with use of statistics by the optimizer, and it provides guidance for using these features when writing queries so you can obtain good query performance. Link to white paper: Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I have a question: I'm writing a cross-platform OpenGL engine in C++, and I've figured out that Windows forces developers to access OpenGL features above 1.1 through extensions. Now the thing is, on Linux I know that I can access functions directly, via glext.h, if the OpenGL version supports them. The problem is: if on Linux the core doesn't support something, is it possible there is an extension that provides the same functionality, in my case vertex buffer objects? I'm doing something like this: Windows: #define glFunction functionpointer_to_the_extension. Linux: since glext.h already declares glFunction, I can write glFunction in client code and compile it on both Windows AND Linux without changing a single line of my client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, without checking the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (VBOs, for example)? Or is an extension something you never know is available? I want to write an engine that uses everything the hardware offers, so I need to check (on Linux) both the extensions and the core version for possible functionality.
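    A minimal sketch of that check, in Python for brevity since the engine itself is C++ (the version and extension strings below are only examples): use the core entry points when the reported version is at least 1.5, otherwise fall back to GL_ARB_vertex_buffer_object.

        def vbo_support(version_string, extension_string):
            # version_string is what glGetString(GL_VERSION) returns, e.g. "1.4 Mesa 7.0.3";
            # extension_string is the space-separated glGetString(GL_EXTENSIONS) list.
            major, minor = (int(part) for part in version_string.split(" ")[0].split(".")[:2])
            if (major, minor) >= (1, 5):
                return "core"        # VBOs were promoted to core in OpenGL 1.5 (glGenBuffers etc.)
            if "GL_ARB_vertex_buffer_object" in extension_string.split():
                return "extension"   # same functionality via the glGenBuffersARB entry points
            return "unsupported"

        print(vbo_support("1.4 Mesa 7.0.3", "GL_ARB_vertex_buffer_object GL_EXT_fog_coord"))
        # prints "extension": the feature is usable even though the core version is too old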

    Read the article

  • How to Install Mac OS X Lion on Your HP ProBook (or Compatible Laptop)

    - by Usman
    There's nothing more satisfying than building a hackintosh, i.e. installing Mac OS X on a non-Apple machine. It isn't as easy as it sounds, but the end result is worth the effort. Building a PC with specific components and installing Mac OS X on it can save you thousands of dollars you might spend on a real Mac. And now, it's time to step into the portable world. Today we will show how you can turn an HP ProBook (or any compatible Sandy Bridge laptop) into a 95% MacBook Pro! Why should (or shouldn't) you do it? Let's clarify whether or not it should be done. Firstly, we all know Apple makes awesome laptops. The design, build quality, and the aesthetics (not to mention the glowing Apple) would make you crave one. Secondly, all these Apple laptops are bundled with Mac OS X, which (for some people) is the most user-friendly and annoyance-free operating system. Digital artists, musicians, video editors - they all prefer Mac for a reason. So the verdict is: if hardware design is what you really look for, you should get a real Mac, and we are not at all stopping you from doing so. But if you're only concerned with the OS (and saving a few bucks in your pocket), you may consider giving this a shot. But remember, it may not perform as well as a real Mac does. The results vary, so hope for the best, and proceed with caution. Why HP ProBook?

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here: // Buggy code! Do not use! double min = double.MaxValue; Parallel.ForEach(collection, item => { double value = item.PerformComputation(); min = System.Math.Min(min, value); }); This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, and element 2 then sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers. Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite this like: // Safe, but slow double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach(collection, item => { double value = item.PerformComputation(); lock(syncObject) min = System.Math.Min(min, value); }); This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
    This leads to two observations I'll make: When parallelizing a routine, try to avoid locks. That being said: Always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as: public static ParallelLoopResult ForEach<TSource, TLocal>( IEnumerable<TSource> source, Func<TLocal> localInit, Func<TSource, ParallelLoopState, TLocal, TLocal> body, Action<TLocal> localFinally ) The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
    This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation. Now that I've explained how this works, let's look at the code: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObject) min = System.Math.Min(min, localState); } ); Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking. This same approach can be used by Parallel.For, although now, it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
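    The same local-state-then-merge pattern, sketched in Python for comparison (purely illustrative; the data, computation and chunk sizes are made up): each worker reduces its own chunk without locking, and only the handful of per-chunk results are merged at the end.

        import concurrent.futures
        import math

        def perform_computation(item):
            return math.sqrt(item) * 3.0          # stand-in for item.PerformComputation()

        def chunk_min(chunk):
            local_min = float("inf")              # this worker's "local state"
            for item in chunk:                    # no locking inside the hot loop
                local_min = min(local_min, perform_computation(item))
            return local_min

        data = list(range(1, 100001))
        chunks = [data[i:i + 10000] for i in range(0, len(data), 10000)]

        # One task per chunk rather than per element; synchronization effectively
        # happens once per worker when the per-chunk results are merged.
        # (CPython's GIL means threads here only illustrate the structure; a
        # ProcessPoolExecutor would give real CPU parallelism for this workload.)
        with concurrent.futures.ThreadPoolExecutor() as pool:
            overall_min = min(pool.map(chunk_min, chunks))

        print(overall_min)   # 3.0, i.e. sqrt(1) * 3.0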

    Read the article

  • Ubuntu 13.10 install ISO crashes on VirtualBox Mac 4.3

    - by John Allsup
    Does anybody know what to do about this? Machine is a 2008 Core 2 Duo iMac with 4GB RAM. (And 64bit Debian 7 boots OK, but I've not tried installing under the latest version of VirtualBox as I have just upgraded VBox today.) VirtualBox 4.3, upon trying to boot a machine with the Ubuntu 13.10 (64bit) iso (with the VM configured for Ubuntu 64bit) crashes, with the following information: Failed to open a session for the virtual machine Ubuntu64. The VM session was aborted. Result Code: NS_ERROR_FAILURE (0x80004005) Component: SessionMachine Interface: ISession {12f4dcdb-12b2-4ec1-b7cd-ddd9f6c5bf4d} === Head of crash dump is below Process: VirtualBoxVM [716] Path: /Applications/VirtualBox.app/Contents/MacOS/VirtualBoxVM Identifier: VirtualBoxVM Version: ??? (???) Code Type: X86 (Native) Parent Process: VBoxSVC [644] Date/Time: 2013-10-17 22:58:23.679 +0100 OS Version: Mac OS X 10.6.8 (10K549) Report Version: 6 Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000040 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 com.apple.CoreFoundation 0x92a25c03 CFSetApplyFunction + 83 1 com.apple.framework.IOKit 0x95557ad4 __IOHIDManagerInitialEnumCallback + 69 2 com.apple.CoreFoundation 0x92a2442b __CFRunLoopDoSources0 + 1563 3 com.apple.CoreFoundation 0x92a21eef __CFRunLoopRun + 1071 4 com.apple.CoreFoundation 0x92a213c4 CFRunLoopRunSpecific + 452 5 com.apple.CoreFoundation 0x92a211f1 CFRunLoopRunInMode + 97 6 com.apple.HIToolbox 0x98eb5e04 RunCurrentEventLoopInMode + 392 7 com.apple.HIToolbox 0x98eb5af5 ReceiveNextEventCommon + 158 8 com.apple.HIToolbox 0x98eb5a3e BlockUntilNextEventMatchingListInMode + 81 9 com.apple.AppKit 0x9971b595 _DPSNextEvent + 847 10 com.apple.AppKit 0x9971add6 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 156 11 com.apple.AppKit 0x996dd1f3 -[NSApplication run] + 821 12 QtGuiVBox 0x019f19e1 QEventDispatcherMac::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 1505 13 QtCoreVBox 0x018083b1 QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 65 14 QtCoreVBox 0x018086fa QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 170 15 QtGuiVBox 0x01eea9e5 QDialog::exec() + 261 16 VirtualBox.dylib 0x011234b8 TrustedMain + 1108104 17 VirtualBox.dylib 0x01126d68 TrustedMain + 1122616 18 VirtualBox.dylib 0x010fac19 TrustedMain + 942057 19 VirtualBox.dylib 0x010f9b3d TrustedMain + 937741 20 VirtualBox.dylib 0x010f81dd TrustedMain + 931245 21 VirtualBox.dylib 0x010f85b8 TrustedMain + 932232 22 VirtualBox.dylib 0x0109d4f8 TrustedMain + 559304 23 VirtualBox.dylib 0x0101521e TrustedMain + 1518 24 ...virtualbox.app.VirtualBoxVM 0x00002e7e start + 2766 25 ...virtualbox.app.VirtualBoxVM 0x000024b5 start + 261 26 ...virtualbox.app.VirtualBoxVM 0x000023e5 start + 53 ==== And please somebody fix the code so that you can just delimit large blocks of code at the start and the end without indenting every line manually by 4 spaces.

    Read the article

  • Wired PS/2 Keyboard and Mouse do not work in 12.04 Live CD or after fresh 12.04 install - Foxconn D270S Atom Motherboard

    - by david krajewski
    My wired PS/2 keyboard and mouse do not work in 12.04, either in a fresh install or from the Live CD. A USB keyboard and mouse will work. The PS/2 keyboard and mouse will work using the same hardware and a fresh 10.04 Ubuntu install or a fresh Windows 7 install. The motherboard is a Foxconn D270S Atom based motherboard. This problem is specific to this motherboard and Ubuntu 12.04. So far I've tried running in the fresh 12.04 install: sudo apt-get install sudo apt-get upgrade sudo apt-get dist-upgrade After the apt-get upgraded the first time, I now get to the point where there are 0 items to be upgraded. Rebooted and still no PS/2 keyboard/mouse. I've also tried adding the following lines to GRUB at boot time (the PS/2 keyboard works in GRUB): acpi=noirq acpi=off Neither setting made any difference. Even more interestingly, 12.04 recognizes the same PS/2 keyboard and mouse on a separate Gigabyte AM3+ based motherboard. I only have this problem on this particular motherboard with Ubuntu 12.04. Any help would be appreciated. I bought this low-power motherboard specifically for the task of running Ubuntu 12.04 for the next five years... Update... I dug out an old PS/2 mouse that does not use a PS/2 to USB adapter but is directly wired for PS/2. Still no PS/2 keyboard or mouse after reboot. Again, I only have this problem with this motherboard and 12.04 Ubuntu. Other motherboards work fine with 12.04 and this motherboard works fine with 10.04. Update 2... I installed the 12.04 Server version. The text-based installer recognized the keyboard without issue, but after the first boot into the installed OS, the PS/2 keyboard would no longer respond. I tried installing Gnome on the Server install and the PS/2 mouse and keyboard don't work in Gnome either. I opened bug #995570 for this issue.

    Read the article

  • VirtualBox host-only networking fails on Ubuntu 11.10 host

    - by Jeremy Kendall
    I've installed Ubuntu 11.10 on a new Lenovo Thinkpad 420s and I'm trying to get some VirtualBox VMs up and running (using Vagrant). Everything works fine until I try to add host-only networking. This is the failure I get in the terminal: I set logging to debug and tried again. Here's a paste of the relevant portion of the log. When I try to add host-only networks with the VirtualBox gui (File-Preferences-Networking-Add host-only networking), I get the following error message: This error is occurring with three different virtual boxes, all Ubuntu 11.10 64bit guests, one of which I've run without issue on a Windows host and an OSX host. Here is the Vagrantfile for the box I've successfully run on Windows and OSX: Vagrant::Config.run do |config| config.vm.box = "ubuntu-11.10" config.vm.box_url = "http://timhuegdon.com/vagrant-boxes/ubuntu-11.10.box" config.vm.network :hostonly, "192.168.33.10" config.vm.customize ["modifyvm", :id, "--memory", "512"] config.vm.customize ["modifyvm", :id, "--natnet1", "10.0.28.0/24"] config.vm.forward_port 80, 4567 end I've tried two other boxes as well, one of which I built last night with veewee, all of which are getting the exact same error. I've used rvm to install ruby 1.9.3-p125 [ x86_64 ], and I've got Vagrant running in its own gemset. I've Googled quite a lot and haven't been able to find any resolution. Suggestions?

    Read the article

  • Free cloud web service development

    - by hyde
    I am looking for a free (as in beer) combination of services, for learning "cloud SW development" and very small scale private use (say, a private streamlined web shopping & todo list with simple auth). The combination should include the full set of needed services:
        DVCS service (like github)
        A cloud service to run the backend code
        A suitable data storage service (preferably not SQL), accessed by the backend (if not included in the backend service)
        A web service, serving the web pages seen by the user, to access the backend functionality
        A "cloud IDE" (ideally one, two is ok too) for both backend and HTML/javascript coding
        If (backend) deployment uses some CI, then that
    Other points:
        Backend programming language can be anything, except VB or PHP
        Everything has to be in the cloud, nothing permanent on a local PC (graphics is not part of the question)
        Looking for a ready-to-use service combination, not a virtual server where I can set anything up myself
        I don't care if a service insists on displaying ads in the user web UI
        "Cheap" and "free trial" are ok too, if "free" does not exist
        As per the example use case, storage, CPU and bandwidth quota requirements are negligible
    Google finds several services of course, all requiring at least registration before testing, so I'm looking for a known-good combination. The ideal answer starts with "I use this service combo: ...", contains links to the services, and gives a brief description and personal experiences.

    Read the article

  • SQL Server and Hyper-V Dynamic Memory - Part 1

    - by SQLOS Team
    SQL and Dynamic Memory Blog Post Series
    Hyper-V Dynamic Memory is a new feature in Windows Server 2008 R2 SP1 that allows the memory assigned to guest virtual machines to vary according to demand. Using this feature with SQL Server is supported, but how well does it work in an environment where available memory can vary dynamically, especially since SQL Server likes memory, and is not very eager to let go of it? The next three posts will look at this question in detail. In Part 1 Serdar Sutay, a program manager in the Windows Hyper-V team, introduces Dynamic Memory with an overview of the basic architecture, configuration and monitoring concepts. In subsequent parts we will look at SQL Server memory handling, and develop some guidelines on using SQL Server with Dynamic Memory.
    Part 1: Dynamic Memory Introduction
    In virtualized environments memory is often the bottleneck for reaching higher VM densities. In Windows Server 2008 R2 SP1 Hyper-V introduced a new feature "Dynamic Memory" to improve VM densities on Hyper-V hosts. Dynamic Memory increases the memory utilization in virtualized environments by enabling VM memory to be changed dynamically when the VM is running. This brings up the question of how to utilize this feature with SQL Server VMs as SQL Server performance is very sensitive to the memory being used. In the next three posts we'll discuss the internals of Dynamic Memory, SQL Server Memory Management and how to use Dynamic Memory with SQL Server VMs.
    Memory Utilization Efficiency in Virtualized Environments
    The primary reason memory is usually the bottleneck for higher VM densities is that users tend to be generous when assigning memory to their VMs. Here are some memory sizing practices we've heard from customers:
        · I assign 4 GB of memory to my VMs. I don't know if all of it is being used by the applications but no one complains.
        · I take the minimum system requirements and add 50% more.
        · I go with the recommendations provided by my software vendor.
    In reality correctly sizing a virtual machine requires significant effort to monitor the memory usage of the applications. Since this is not done in most environments, VMs are usually over-provisioned in terms of memory. In other words, a SQL Server VM that is assigned 4 GB of memory may not need to use 4 GB.
    How does Dynamic Memory help?
    Dynamic Memory improves the memory utilization by removing the requirement to determine the memory need for an application. Hyper-V determines the memory needed by applications in the VM by evaluating the memory usage information in the guest with Dynamic Memory. VMs can start with a small amount of memory and they can be assigned more memory dynamically based on the workload of applications running inside.
    Overview of Dynamic Memory Concepts
        · Startup Memory: Startup Memory is the starting amount of memory when Dynamic Memory is enabled for a VM. Dynamic Memory will make sure that this amount of memory is always assigned to the VMs by default.
        · Maximum Memory: Maximum Memory specifies the maximum amount of memory that a VM can grow to with Dynamic Memory.
        · Memory Demand: Memory Demand is the amount determined by Dynamic Memory as the memory needed by the applications in the VM. In Windows Server 2008 R2 SP1, this is equal to the total amount of committed memory of the VM.
        · Memory Buffer: Memory Buffer is the amount of memory assigned to the VMs in addition to their memory demand to satisfy immediate memory requirements and file cache needs.
    Once Dynamic Memory is enabled for a VM, it will start with the "Startup Memory". After the boot process Dynamic Memory will determine the "Memory Demand" of the VM. Based on this memory demand it will determine the amount of "Memory Buffer" that needs to be assigned to the VM. Dynamic Memory will assign the total of "Memory Demand" and "Memory Buffer" to the VM as long as this value is less than "Maximum Memory" and as long as physical memory is available on the host.
    What happens when there is not enough physical memory available on the host?
    Once there is not enough physical memory on the host to satisfy VM needs, Dynamic Memory will assign less than needed amount of memory to the VMs based on their importance. A concept known as "Memory Weight" is used to determine how much VMs should be penalized based on their needed amount of memory. "Memory Weight" is a configuration setting on the VM. It can be configured to be higher for the VMs with high performance requirements. Under high memory pressure on the host, the "Memory Weight" of the VMs are evaluated in a relative manner and the VMs with lower relative "Memory Weight" will be penalized more than the ones with higher "Memory Weight".
    Dynamic Memory Configuration
    Based on these concepts "Startup Memory", "Maximum Memory", "Memory Buffer" and "Memory Weight" can be configured as shown below in Windows Server 2008 R2 SP1 Hyper-V Manager. Memory Demand is automatically calculated by Dynamic Memory once VMs start running.
    Dynamic Memory Monitoring
    In Windows Server 2008 R2 SP1, Hyper-V Manager displays the memory status of VMs in the following three columns:
        · Assigned Memory represents the current physical memory assigned to the VM. In regular conditions this will be equal to the sum of "Memory Demand" and "Memory Buffer" assigned to the VM. When there is not enough memory on the host, this value can go below the Memory Demand determined for the VM.
        · Memory Demand displays the current "Memory Demand" determined for the VM.
        · Memory Status displays the current memory status of the VM. This column can represent three values for a VM:
            o OK: In this condition the VM is assigned the total of Memory Demand and Memory Buffer it needs.
            o Low: In this condition the VM is assigned all the Memory Demand and a certain percentage of the Memory Buffer it needs.
            o Warning: In this condition the VM is assigned a lower memory than its Memory Demand. When VMs are running in this condition, it's likely that they will exhibit performance problems due to internal paging happening in the VM.
    So far so good! But how does it work with SQL Server?
    SQL Server is aggressive in terms of memory usage for good reasons. This raises the question: How do SQL Server and Dynamic Memory work together? To understand the full story, we'll first need to understand how SQL Server Memory Management works. This will be covered in our second post in the "SQL and Dynamic Memory" series.
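    A minimal model of the assignment rules described above (an illustrative Python sketch only, not Hyper-V's actual algorithm; all figures are examples, and Memory Weight's handling of a shortfall is left out):

        def assigned_memory_mb(demand_mb, buffer_percent, startup_mb, maximum_mb, host_available_mb):
            # Buffer is expressed as a percentage on top of the demand; the result is
            # capped by Maximum Memory, never drops below Startup Memory, and is
            # further limited by whatever the host can actually spare.
            wanted = demand_mb + demand_mb * buffer_percent // 100
            wanted = min(max(wanted, startup_mb), maximum_mb)
            return min(wanted, host_available_mb)

        # Example: 2 GB demand, 20% buffer, 512 MB startup, 8 GB maximum, ample host memory
        print(assigned_memory_mb(2048, 20, 512, 8192, 32768))   # -> 2457 (demand + buffer)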
    Meanwhile, if you want to dive deeper into Dynamic Memory you can check the posts below from the Windows Virtualization Team Blog:
        http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx
        http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx
        http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx
        http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx
        http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx
        http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx
    - Serdar Sutay
    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • MySQL at Mobile World Congress (on Valentine's Day...)

    - by mat.keep(at)oracle.com
    It is that time of year again when the mobile communications industry converges on Barcelona for what many regard as the premier telecommunications show of the year. Starting on February 14th, what better way for a Brit like me to spend Valentine's Day than with 50,000 mobile industry leaders (my wife doesn't tend to read this blog, so I'm reasonably safe with that statement). As ever, Oracle has an extensive presence at the show, and part of that presence this year includes MySQL. We will be running a live demonstration of the MySQL Cluster database on Booth 7C18 in the App Planet. The demonstration will show how the MySQL Cluster Connector for Java is implemented to provide native connectivity to the carrier grade MySQL Cluster database from Java ME clients via Java SE virtual machines and Java EE servers. The demonstration will show how end-to-end Java services remain continuously available during both catastrophic failures and scheduled maintenance activities. The MySQL Cluster Connector for Java provides both a native Java API and a JPA plug-in that directly map Java objects to relational tables stored in the MySQL Cluster database, without the overhead and complexity of having to transform objects to JDBC and then SQL. The result is 10x higher throughput, and a simpler development model for Java engineers. Stop by the stand for a demonstration, and an opportunity to speak with the MySQL telecoms team, who will share experiences on how MySQL is being used to bring the innovation of the web to the carrier network. Of course, if you can't make it to Barcelona, you can still learn more about the MySQL Cluster Connector for Java from this whitepaper, and you are free to download it as part of MySQL Cluster Community Edition. Let us know via the comments if you have Java applications that you think will benefit from the MySQL Cluster Connector for Java. I can't promise that Valentine's Day at MWC will be the time you fall in love with MySQL Cluster... but I'm confident you will at least develop a healthy respect for it.

    Read the article

  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
    There's a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of the hook points to allow customization of your application. Watching Brad Wilson's Advanced MP3 from MVC Conf inspired me to write about this class. What MSDN says: "Represents a class that is responsible for invoking the action methods of a controller." Well, if MSDN says it, I think I can instill a fair amount of confidence into what the class does. But just to get to the details, I also looked into the source code for MVC. It seems the base class Controller is where an IActionInvoker is initialized: 1: protected virtual IActionInvoker CreateActionInvoker() { 2: return new ControllerActionInvoker(); 3: } In the ControllerActionInvoker (the O-O-B behavior), there are different 'versions' of the InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult. 1: protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) { 2: object returnValue = actionDescriptor.Execute(controllerContext, parameters); 3: ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue); 4: return result; 5: } I guess that's enough on the 'behind-the-scenes' of this class. Let's see how we can use this class to hook up extensions. Say I have a requirement that the user should be able to get different renderings of the same output, like html, xml, json, csv and so on. The user will type in the output format in the url and should get the result accordingly. For example: http://site.com/RenderAs/ – renders the default way (the razor view) http://site.com/RenderAs/xml http://site.com/RenderAs/csv … and so on, where RenderAs is my controller. There are many ways of doing this and I'm using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish this). For this, my one and only route in the Global.asax.cs is: 1: routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}", 2: new {controller = "RenderAs", action = "Index", outputType = ""}); Here the controller name is 'RenderAsController' and the action that'll get called (always) is the Index action. The outputType parameter will map to the type of output requested by the user (xml, csv…). I intend to display a list of food items for this example. 1: public class Item 2: { 3: public int Id { get; set; } 4: public string Name { get; set; } 5: public Cuisine Cuisine { get; set; } 6: } 7:  8: public class Cuisine 9: { 10: public int CuisineId { get; set; } 11: public string Name { get; set; } 12: } Coming to my 'RenderAsController' class, I generate an IList<Item> to represent my model. 1: private static IList<Item> GetItems() 2: { 3: Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" }; 4: Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine }; 5: IList<Item> items = new List<Item> { item }; 6: item = new Item {Id = 2, Name = "Pasta", Cuisine = cuisine}; 7: items.Add(item); 8: //... 9: return items; 10: } My action method looks like: 1: public IList<Item> Index(string outputType) 2: { 3: return GetItems(); 4: } There are two things that stand out in this action method. The first and most obvious one is that the return type is not of type ActionResult (or one of its derivatives). Instead I'm passing the type of the model itself (IList<Item> in this case).
We’ll convert this to some type of an ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I’m not doing anything with the outputType value that is passed on to this action method. This value will be in the RouteData dictionary and we’ll use this in our custom invoker class as well. It’s time to hook up our invoker class. First, I’ll override the Initialize() method of my RenderAsController class. 1: protected override void Initialize(RequestContext requestContext) 2: { 3: base.Initialize(requestContext); 4: string outputType = string.Empty; 5:  6: // read the outputType from the RouteData dictionary 7: if (requestContext.RouteData.Values["outputType"] != null) 8: { 9: outputType = requestContext.RouteData.Values["outputType"].ToString(); 10: } 11:  12: // my custom invoker class 13: ActionInvoker = new ContentRendererActionInvoker(outputType); 14: } Coming to the main part of the discussion – the ContentRendererActionInvoker class: 1: public class ContentRendererActionInvoker : ControllerActionInvoker 2: { 3: private readonly string _outputType; 4:  5: public ContentRendererActionInvoker(string outputType) 6: { 7: _outputType = outputType.ToLower(); 8: } 9: //... 10: } So the outputType value that was read from the RouteData, which was passed in from the url, is being set here in  a private field. Moving to the crux of this article, I now override the CreateActionResult method. 1: protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue) 2: { 3: if (actionReturnValue == null) 4: return new EmptyResult(); 5:  6: ActionResult result = actionReturnValue as ActionResult; 7: if (result != null) 8: return result; 9:  10: // This is where the magic happens 11: // Depending on the value in the _outputType field, 12: // return an appropriate ActionResult 13: switch (_outputType) 14: { 15: case "json": 16: { 17: JavaScriptSerializer serializer = new JavaScriptSerializer(); 18: string json = serializer.Serialize(actionReturnValue); 19: return new ContentResult { Content = json, ContentType = "application/json" }; 20: } 21: case "xml": 22: { 23: XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType()); 24: using (StringWriter writer = new StringWriter()) 25: { 26: serializer.Serialize(writer, actionReturnValue); 27: return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" }; 28: } 29: } 30: case "csv": 31: controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv"); 32: return new ContentResult 33: { 34: Content = ToCsv(actionReturnValue as IList<Item>), 35: ContentType = "application/ms-excel" 36: }; 37: case "pdf": 38: string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf"); 39: controllerContext.HttpContext.Response.AddHeader("content-disposition", 40: "attachment; filename=items.pdf"); 41: ToPdf(actionReturnValue as IList<Item>, filePath); 42: return new FileContentResult(StreamFile(filePath), "application/pdf"); 43:  44: default: 45: controllerContext.Controller.ViewData.Model = actionReturnValue; 46: return new ViewResult 47: { 48: TempData = controllerContext.Controller.TempData, 49: ViewData = controllerContext.Controller.ViewData 50: }; 51: } 52: } A big method there! The hook I was talking about kinda above actually is here. This is where different kinds / formats of output get returned based on the output type requested in the url. 
    When the _outputType is not set (string.Empty as set in the Global.asax.cs file), the razor view gets rendered (lines 45-50). This is the default behavior in most MVC applications, wherein a view (webform/razor) gets rendered in the browser. As you see here, this gets returned as a ViewResult. But then, for an outputType of json/xml/csv, a ContentResult gets returned, while for pdf, a FileContentResult is returned. Here is how the different kinds of output look: This is how we can leverage this feature of ASP.NET MVC to develop a better application. I've used the iTextSharp library to convert to PDF format. Mike gives quite a bit of detail regarding this library here. You can download the sample code here. (You'll get an option to download once you open the link). Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.

    Read the article

  • Sun Oracle Database Machine at Romania's Banca Transilvania

    - by Fekete Zoltán
    Oracle press release: Banca Transilvania, first institution in Romania to use Sun Oracle Database Machine (English version) Success story and customer case study available as a PDF. Oracle announced the Database Machine V2 in September 2009. Banca Transilvania in Romania is the first bank in the world to run the Database Machine V2 in production! Read the press release. Banca Transilvania has 1.5 million customers. "This system, product of Oracle and Sun, is the fastest server in the world for data storage, online transactions processing and data warehousing applications." Robert C. Rekkers, Banca Transilvania CEO, said: "Business information is accessed 30 times faster using the new system, leading to quicker decisions and a better data base segmentation", meaning that with the Database Machine they can answer business questions 30 times faster than with the previous system. Leontin Toderici, Banca Transilvania COO, added: "The acquisition price was excellent, as the costs were below those of an ordinary system", that is, the system cost less than a conventional one. Sorin Mindrutescu, head of Oracle Romania, is proud that a Romanian company is among the first users of this innovative system: "Oracle Exadata V2 is the result of over 30 years of experience in hardware and software development of two leader companies. I am glad that a top Romanian company is amongst the first in the world to use this innovative product." The Exadata product family and the Database Machine are excellent platforms for running the databases behind OLTP systems, data warehouses and consolidation projects. A single package contains the software and the "smart" hardware - the compute and storage components - all connected by extremely fast InfiniBand links. Banca Transilvania tested the Database Machine at Oracle's Reading (UK) centre and saw performance ten times, in places seventy-two times, faster than on the previous system - a 10-72x performance gain! - noted Tudor Iliescu, Trend Import - Export CEO. The central Oracle press release: Customers Select Oracle® Exadata for Extreme Performance of Data Warehouse and OLTP Applications

    Read the article

  • No-Weld Multi-Monitor Stand Crafted From Sturdy Metal Framing

    - by Jason Fitzpatrick
    As far as DIY stands for multiple monitors go, this design has to be the sturdiest and least difficult to construct model we’ve seen in some time. Read on to see how one DIYer cleverly crafted a solid metal triple monitor stand with no welding involved. Tinker and gamer Opteced wanted a new stand for his Eyefinity setup but wasn’t in a hurry to spend a pile of cash on a custom stand. His DIY solution is just as sturdy as a commercial metal stand but is made out of inexpensive hardware store parts–the main supports and base are made from Unistrut, a simple metal framing material. Unlike many DIY stands made from metal rods and piping, this build doesn’t require any sort of welding or custom pipe threading. In fact, the metal struts are so over-engineered for the task of holding up flat-panel monitors that he was able to simply partially saw through them and bend them to the shape he wanted. Hit up the link below for additional pictures of the build. Unistrut Monitor Stand [via Hack A Day]

    Read the article

  • Letter to Ballmer: Making Better Consumer Devices

    - by andrewbrust
    Last year, I wrote Steve Ballmer an email, and he was kind enough to write me back. The email contained a scan of a column I wrote praising Microsoft’s BI strategy. His reply contained three simple words: “Super nice  thanks.” Well, now I’d like to write to Steve again, in an open letter format, and this time the love may be a bit tougher. But I’m still super earnest.

    The past two days have been eventful ones for Microsoft: the company announced the departure of veterans Robbie Bach and J Allard, and the market announced Apple is now besting Microsoft in market capitalization. Plus, announcements were made that make it plain that Ballmer will, in effect, be running Microsoft’s Entertainment & Devices division himself. With that in mind, I’d like to offer my list of a dozen things I think Microsoft’s CEO should do to improve that division’s offerings and, hopefully, its bottom line. So here goes:

    1. On Windows Phone 7, Stay the Course
    The press is teeming with headlines and reader comments proclaiming the death-before-arrival of Windows Phone 7. That’s plain silly. You’ve got the makings of a great and unique smartphone platform, and you’re the only company (even considering RIM) that can offer full-fidelity Exchange integration, not to mention implementing Office on the device. Let the existing team finish this puppy and ship it. And then have them pump out a few updates, over the air, quickly. Show them that Google Android’s not the only product that can do good, rapid dot releases. And another thing: make sure your OEMs’ devices have flawless touch screens. If they don’t, then you shouldn’t certify them for delivery to customers. Period. Oh, and kill the Kin, quietly. It was DOA, and you know it.

    2. Move Media Center to the Xbox Platform
    Media Center is, at its core, a good product. But delivering a media distribution and DVR platform on a sophisticated PC operating system like Windows 7 just creates too many moving parts. Xbox already functions as the best Media Center extender device – it should actually be the hub as well. Media Center is mostly based on .NET code – and XNA is a .NET environment for Xbox – find a way to bridge that small gap and make Media Center a joy to work with instead of a frustration. Beating Apple TV out of this sub-market is the lowest hanging fruit on the tree (goofy pun, but it’s true).

    3. Integrate Media Center with Mediaroom, or Kill the Latter
    You have two media products with almost identical names. One is for standalone DVRs and the other is for IPTV cable set tops with DVR capabilities. Can we merge these, please? My previous request of putting Media Center on Xbox would seem to tie into this nicely, since you’ve announced plans to do that with Mediaroom already.

    4. Fix the Red Ring of Death
    People love the Xbox, but they really don’t love sending their consoles back every 18-24 months, when they get a bunch of red lights flashing on power up. You’ve handled this defect about as gracefully as possible, but it’s been around for a long time now and it doesn’t seem to be fixed yet. You can do better. In fact, you must do better, or you insult your customers.

    5. Add Blu-ray to Xbox
    I know, streaming movies are the future; physical media is legacy technology. So if that’s true, why did you back HD DVD so hard? You know why: for now, the film studios won’t allow a large selection of new-release, HD, surround-sound content to be distributed on any medium other than Blu-ray or cable pay-per-view/on-demand.
    Don’t you want home theater buffs to see the Xbox as a fantastic device for their rigs? Don’t you want to put PlayStation 3 out of its misery? And if you follow my suggestions above (move Media Center to the Xbox and fix the Red Ring problem), you’d have it all sewn up. Do I think Blu-ray functionality will move a lot of units? No. Do I think that it would move more units with desperately needed influential home theater consumers? You bet. And you might sell more ZunePass subscriptions in the process. But while you’re at it, make the fan quieter, please.

    6. Make More of Windows Home Server
    Home Server is a fantastic product. And for reasons unknown to me, it seems like you’re letting it languish. Development of the add-in ecosystem seems underfunded. WHS’ unparalleled ease of use and reliability for home PC backup (and emergency restores) goes unsung. Product cycles are slow. Support for your OEMs, who are doing great work, especially in the green space with Atom CPUs, seems lacking. You’ve married a trophy girl and you keep her cloistered at home! That’s cruel, unusual and, um, incredibly ill-advised. Make use of this ace card, and while you’re at it, give it real integration with Media Center. The integration thus far is proof-of-concept quality. You should go way past that – both products will benefit immeasurably.

    7. Set Up a Partner Platform for Custom Installers
    There’s a whole sub-industry of companies that install, integrate and configure home theater, security and connected-home products. They have an industry group. They are influential in the high end of the consumer electronics industry, and so are their customers. They love Media Center and they love Windows Home Server. But I have talked to several of them at the Consumer Electronics Show and they tell me you don’t love them. They find it very difficult to do business with Microsoft, even though they want nothing more than to sell and evangelize your platform. This is a travesty. Please fix it. Get Allison Watson and the Microsoft Partner Network on board and have her hire someone who knows how to run a channel program for consumer electronics companies. Problem solved. Markets expanded.

    8. Make Your Own Hardware
    In other areas, I know you love your partners. I help run one, so I appreciate that. But when it came to Xbox and Zune you built them yourself (albeit on a contract basis, which is fine). Windows Phone 7 has a chance to work as an OEM play, but it would work better if you produced the devices. At least consider building a reference device that sells alongside your OEMs’ offerings. That’s what Google did with the Nexus One. And while that phone was not itself a big seller, it catalyzed two wonderful things: (1) a quality bar was set and (2) partners exceeded it. Before the Nexus One, the best Android handset out there was the Motorola Droid. The Nexus One was better, and the HTC Droid Incredible and Evo 4G are now even better than Google’s phone, which is why Verizon and Sprint decided not to carry it. Imagine if all Windows Phone 6.x devices were on par with the HTC HD2. I tend to believe you’d have a lot bigger market share than you do now.

    9. Continue with Your Retail Initiative
    From what I hear, it sounds like it’s going well. And this goes right along with making your own hardware. When you build it, they will come. And then it makes the likes of Best Buy and Staples do better.

    10. Make an Acquisition (or Two)
    TiVo and/or Moxi look ripe for the picking.
    With their ability to build stuff people love and your ability to run a business, you might just have something. But do a better job than you did when you bought Danger. Buy the ideas, not just the customers, eh?

    11. Make Beautiful Stuff
    You’ve heard this one before, I know. But I have some head-shrinking advice on this one. You know that Apple obsesses over its industrial design. You know that appeals to consumers. But it seems you think doing so is Apple’s game exclusively and so you shouldn’t even try. Bull dinky. Come to New York and visit the Museum of Modern Art’s Architecture and Design gallery. You’ll see that lots of companies and product categories have had very high design value well before Apple existed. You can do this, and the Zune HD was a great start. Now run with that. Find those negative voices in your head that are telling you that you can’t and shut them up. For good.

    12. Burst the Bubble
    Some of the products you’ve built seem like they were conceived in a bizarro world. That would appear to be the result of groupthink. You must do better. And there are lots of people willing to advise you. This includes just about everyone in the Regional Director program, and probably a bunch of MVPs. Heck, I bet the guys at Engadget could help out too. Imagine if you let them see the Kin before it shipped. Talk to high-end gear consumers. Talk to Best Buy and Costco customers too.

    Signing Off
    I hope this was of value to you. As I wrote this I kept telling myself how obvious, even trite, some of these pieces of advice were and then, because of that, doubting they’d really help. But I decided that they must not be obvious to Microsoft. Sometimes when you get wrapped up in stuff, it’s hard to clear your head. I think my head’s pretty clear here though (I’m wrapped up in other stuff), so maybe my perspective can help. If not, well, then, I guess they all can’t be super nice.

    Read the article

< Previous Page | 460 461 462 463 464 465 466 467 468 469 470 471  | Next Page >