Search Results

Search found 343 results on 14 pages for 'cold hawaiian'.

Page 9 of 14

  • Modelling multiple simultaneous states

    - by James P.
    How can you go about modelling an object that can have multiple simultaneous states? For example, you could have a person who's waiting for a bus. That's one state. But they could also be reading a newspaper while waiting for the bus. Furthermore, they could be thinking about something while reading the newspaper. They could also be sniffing their nose because they have a cold. That's four states in all taking place at the same time. Obviously using booleans would be tedious and inflexible. Also, a conventional state pattern would mean that states are exclusive and can't be simultaneous in nature. The only thing I can think of is a State pattern combined with a Composite. Would this do, or is there a way of taking things further?
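    One rough direction, sketched here in Java with illustrative names (not from the question): keep the set of active states as data, e.g. in an EnumSet, so any combination can hold at once.

        import java.util.EnumSet;

        public class Person {
            enum State { WAITING_FOR_BUS, READING_NEWSPAPER, THINKING, SNIFFING }

            // EnumSet is a compact bit set keyed by the enum, so any
            // combination of states can be active simultaneously.
            private final EnumSet<State> states = EnumSet.noneOf(State.class);

            public void enter(State s) { states.add(s); }
            public void leave(State s) { states.remove(s); }
            public boolean is(State s) { return states.contains(s); }

            public static void main(String[] args) {
                Person p = new Person();
                p.enter(State.WAITING_FOR_BUS);
                p.enter(State.READING_NEWSPAPER);
                p.enter(State.THINKING);
                p.enter(State.SNIFFING);
                System.out.println(p.is(State.READING_NEWSPAPER)); // true
            }
        }

    If each state also needs behavior, a map from the enum to per-state objects gives roughly the State-plus-Composite combination the question describes.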

    Read the article

  • List of fundamental data structures - what am I missing?

    - by jboxer
    I've been studying my fundamental data structures a bunch recently, trying to make sure I've got them down cold. By "fundamental", I mean the real basic ones. Fancy ones like Red-Black Trees and Bloom Filters are clearly worth knowing, but they're usually either enhancements of fundamental ones (Red-Black Trees are binary search trees with special properties to keep them balanced) or they're only useful in very specific situations (Bloom Filters). So far, I'm "fluent" in the following data structures:

        Arrays
        Linked Lists
        Stacks/Queues
        Binary Search Trees
        Heaps/Priority Queues
        Hash Tables

    However, I feel like I'm missing something. Are there any fundamental ones that I'm forgetting about?

    EDIT: Added these after posting the question:

        Strings (suggested by catchmeifyoutry)
        Sets (suggested by Peter)
        Graphs (suggested by Nick D and aJ)
        B-Trees (suggested by tloach)

    I'm a little on the fence about whether these are too fancy or not, but I think they're different enough from the fundamental structures (and important enough) to be worth studying as fundamental.

    Read the article

  • Wordpress Custom Themes > How to script Categories, Tags and Excerpt options to appear on "page" edi

    - by Scott B
    I'm using custom WordPress categories for some probably unintended functions, I will admit (things like marking a post noindex just by adding it to my custom "no-index" category, etc.). But it sucks that WordPress does not give pages all the little extras you get with posts. For example, while the post editor gives you convenient access to Tags, Categories, and a post excerpt field, pages get left out in the cold. I'd like to know if it's possible to add these items to the page editor via an add_action() directive from a custom theme.

    Read the article

  • Can you specify if aspnet_compiler.exe creates a debug or release build?

    - by user169867
    I wish to compile my ASP.NET MVC application using aspnet_compiler.exe from the command line to speed up cold startup. I'm wondering how it determines whether it should do a release or debug build. Is it always release? Does it depend on what the web.config file says when you run aspnet_compiler.exe? What happens to an application that's been compiled with aspnet_compiler.exe if someone changes the debug attribute in the web.config file after it has been published? Any clarification on this would be greatly appreciated.
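    For orientation, a hedged sketch (paths are illustrative): as far as I can tell, the precompiler honors the <compilation> element of web.config at the time it runs, and its -d switch forces debug information into the output regardless.

        <!-- web.config: debug="false" yields release-style output -->
        <system.web>
          <compilation debug="false" targetFramework="4.0" />
        </system.web>

        rem precompile the site; add -d to force a debug build
        aspnet_compiler -v /MyApp -p C:\src\MyApp C:\out\MyApp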

    Read the article

  • RegEx (or other) parsing of script

    - by jpmyob
    RegEx is powerful, it is true, but I have a little query for you. I want to parse out the FUNCTIONS from some old code in JS; however, I am RegEx handicapped (mentally deficient in grasping the subtleties). The issue that makes me not even try is two-fold: 1) myVar = function(x){ yadda yadda } AND function myVar(x) { yadda yadda } are found throughout. Could I write a parser for each? Sure, but that seems inefficient. 2) MANY things may reside INSIDE the {}, including OTHER sets of {} or other function(){} blocks of text. HELP - does anyone have, or know of, some code parsing snippets or examples that will parse out the info I want to collect? Thanks
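    For what it's worth, one alternation can catch both declaration styles; the bodies themselves (arbitrarily nested braces) are beyond a single regex, so match the header and then walk the braces with a depth counter. A rough sketch in Java, with an illustrative pattern:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class JsFunctionFinder {
            // Covers "function myVar(...)" and "myVar = function(...)"
            private static final Pattern FN = Pattern.compile(
                "function\\s+(\\w+)\\s*\\(|(\\w+)\\s*=\\s*function\\s*\\(");

            public static void main(String[] args) {
                String src = "function foo(x) { return x; } var bar = function(y) { { } };";
                Matcher m = FN.matcher(src);
                while (m.find()) {
                    String name = m.group(1) != null ? m.group(1) : m.group(2);
                    // The body starts at the first '{' after the match; from
                    // there, increment on '{' and decrement on '}' until the
                    // counter returns to zero to find the closing brace.
                    System.out.println("found function: " + name);
                }
            }
        }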

    Read the article

  • Representing a very large array of bits in little memory

    - by user614624
    Hello, I would like to represent a structure containing 250M states (1 bit each) in as little memory as possible (100 KB maximum). The operations on it are set/get. I could not say whether it's dense or sparse; it may vary. The language I want to use is C. I looked at other threads here to find something suitable as well. A probabilistic structure like a Bloom filter, for example, would not fit because of the possible false answers. Any suggestions, please?
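    For scale: 250,000,000 bits is roughly 30 MB uncompressed, so a 100 KB budget forces compression or a disk/page-backed scheme; the flat bitset below is only the baseline to build on. Sketched in Java here, but the shifts and masks translate line-for-line to C:

        public class BitArray {
            private final long[] words;

            public BitArray(long nBits) {
                // one 64-bit word per 64 states, rounded up
                words = new long[(int) ((nBits + 63) / 64)];
            }

            public void set(long i, boolean v) {
                if (v) words[(int) (i >>> 6)] |=  (1L << (i & 63));
                else   words[(int) (i >>> 6)] &= ~(1L << (i & 63));
            }

            public boolean get(long i) {
                return (words[(int) (i >>> 6)] & (1L << (i & 63))) != 0;
            }
        }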

    Read the article

  • Replacing the Import Table in PE file by standard LoadLibrary...

    - by user308368
    Hello. I have an executable (PE) file that loads a DLL file as represented in the import table... let's say: PEFile.exe, Modules.dll. My question is: how can I remove Modules.dll's import descriptor from the imports and do its work via LoadLibrary, without relying on the import table and without destroying the file? My bigger problem is that I could not understand exactly how the import mechanism works... after the loader reads the information it needs from the import table, I believe it uses the LoadLibrary and GetProcAddress APIs... but I couldn't understand what it does with the pointers it gets... it puts them somewhere in memory... and then what, just calls them? All the papers I found on the net explain the structure of the import table, but I didn't find one that explains how it actually works and gets used... I hope you could understand my gibberish English... Thank you!

    Read the article

  • Forcibly clear memory in java

    - by MBennett
    I am writing an application in Java that I care about being secure. After encrypting a byte array, I want to forcibly remove from memory anything potentially dangerous, such as the key used. In the following snippet key is a byte[], as is data.

        SecretKeySpec secretKeySpec = new SecretKeySpec(key, "AES");
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, secretKeySpec);
        byte[] encData = cipher.doFinal(data, 0, data.length);
        Arrays.fill(key, (byte)0);

    As far as I understand, the last line above overwrites the key with 0s so that it no longer contains any dangerous data, but I can't find a way to overwrite or evict secretKeySpec or cipher similarly. Is there any way to forcibly overwrite the memory held by secretKeySpec and cipher, so that if someone were to be able to view the current memory state (say, via a cold boot attack), they would not get access to this information?
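    Not from the thread, but worth noting for later readers: since Java 8, SecretKey extends javax.security.auth.Destroyable, so you can at least ask the key to wipe itself. A hedged sketch; be aware that SecretKeySpec keeps its own internal copy of the key bytes, and its default destroy() simply refuses:

        import java.util.Arrays;
        import javax.crypto.spec.SecretKeySpec;
        import javax.security.auth.DestroyFailedException;

        public class KeyWipe {
            static void useAndWipe(byte[] key) {
                SecretKeySpec spec = new SecretKeySpec(key, "AES");
                try {
                    // ... init a Cipher with spec and encrypt here ...
                } finally {
                    Arrays.fill(key, (byte) 0);  // zero our copy
                    try {
                        spec.destroy();          // the spec's copy, if supported
                    } catch (DestroyFailedException e) {
                        // default SecretKeySpec.destroy() throws; its internal
                        // copy then lives until the GC reclaims it
                    }
                }
            }
        }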

    Read the article

  • .NET Reflector Pro Coming…

    The very best software is almost always originally the creation of a single person. Readers of our 'Geek of the Week' will know of a few of them. Even behemoths such as MS Word or Excel started out with one programmer. There comes a time with any software that it starts to grow up, and has to move from this form of close parenting to being developed by a team. This has happened several times within Red-Gate: SQL Refactor, SQL Compare, and SQL Dependency Tracker, not to mention SQL Backup, were all originally the work of a lone coder, who subsequently handed over the development to a structured team of programmers, test engineers and usability designers. Because we loved .NET Reflector when Lutz Roeder wrote and nurtured it, and, like many other .NET developers, used it as a development tool ourselves, .NET Reflector's progress from being the apple of Lutz's eye to being a Red-Gate team-based development seemed natural. Lutz, after all, eventually felt he couldn't afford the time to develop it to the extent it deserved. Why, then, did we want to take on .NET Reflector? Different people may give you different answers, but for us in the .NET team, it just seemed a natural progression. We're always very surprised when anyone suggests that we want to change the nature of the tool since it seems right just as it is. .NET Reflector will stay very much the tool we all use and appreciate, although the new version will support .NET 4, and will have many improvements in the accuracy of its decompiling. Whilst we've made a lot of improvements to Reflector, the radical addition, which we hope you'll want to try out as well, is '.NET Reflector Pro'. This is an extension to .NET Reflector that allows the debugging of decompiled code using the Visual Studio debugger. It is an add-in, but we'll be charging for it, mainly because we prefer to live indoors with a warm meal, rather than outside in tents, particularly when the winter's been as cold as this one has. We're hoping (we're even pretty confident!) that you'll share our excitement about .NET Reflector Pro. .NET Reflector Pro integrates .NET Reflector into Visual Studio, allowing you to seamlessly debug into third-party code and assemblies, even if you don't have the source code for them. You can now treat decompiled assemblies much like your own code: you can step through them and use all the debugging techniques that you would use on your own code. Try the beta now.

    Read the article

  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspx
    Hot on the heels of my .NET caching course, I've had my first "fundamentals" course released on Pluralsight: Nginx and PHP Fundamentals. It's a practical look at two of the biggest technologies on the web: Nginx, which is the fastest-growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites). The two technologies work well together; both are open-source and cross-platform, and both are lightweight and easy to get started with - you just need to download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I've used PHP as a second (sometimes third) language since 2005, when I was brought cold into an established codebase to help improve performance, and Nginx to host tier 2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world. In the course I start with a website in two parts - one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP:
        Hosting Static Content in Nginx – how to deploy and configure Nginx for a basic website;
        PHP Part 1: Basic Web Forms – installing PHP and an IDE, and building a simple form with server-side validation;
        PHP Part 2: Packages and Integration – using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA;
        Hosting PHP in Nginx – configuring Nginx to host our PHP site.
    Along the way I run some performance stats with JMeter, and the headlines are that Nginx running on Linux outperforms IIS on Windows for static content by 800 requests per second over 1000 concurrent requests, and that Linux+Nginx+PHP outperforms Windows+IIS+ASP.NET MVC by 700 requests per second with the same load. Of course, the headline stats don't tell the whole story, and when you add opcode caching for PHP and the ASP.NET output cache, the results are very different. As Web architecture moves away from heavy server-side processing to Single Page Apps with client-side frameworks like AngularJS and Knockout, I think there's an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.
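    As a flavor of the final module, handing .php requests to a FastCGI backend takes only a few lines of Nginx configuration. A minimal sketch, assuming PHP-FPM is listening on 127.0.0.1:9000 (the address is an assumption, not taken from the course):

        location ~ \.php$ {
            include        fastcgi_params;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass   127.0.0.1:9000;
        }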

    Read the article

  • Silverlight User Group of Switzerland (SLUGS)

    - by Laurent Bugnion
    Last Thursday, the Silverlight Firestarter event took place in Redmond and was streamed live to a large audience worldwide (around 20'000 people). Approximately 30 of them were in Wallisellen near Zurich, in Microsoft Switzerland's offices. This was not only a great occasion to learn more about the future of Silverlight and to see great demos, but also the very first meeting of the Silverlight User Group of Switzerland (SLUGS). Having 30 people at a first meeting was a great success, especially if we consider that it was REALLY cold that night, and that it had snowed 20 cm the night before! We all had a good time, and 3 lucky winners went back home with a prize: one LG Optimus 7 Windows Phone and two copies of Silverlight 4 Unleashed. Congratulations to the winners! After the keynote (which went by in a whirlwind, shortest 90 minutes ever!), we all had pizza and beverages generously sponsored by the Swiss DPE team, of which no fewer than 5 guys came to the event! Thanks to Stefano, Ronnie, Sascha, Big Mike and Ken for attending! We decided to have meetings every month. Stay tuned for announcements on when and where the events will take place. We are also in the process of creating various groups online where attendees can find more information. For instance, I created a group on Flickr where the pictures taken at events will be published. The group is public, and the pictures of the first event are already online! We also have the already known page at http://www.slugs.ch/, check it out.
    A national group: Even though the first event was in Zurich, and 3 of the founding members live nearby, we would like to try to be a national group. That means having events sometimes in other parts of Switzerland, collaborating with other local user groups, etc. Stay tuned for more!
    Join! We want you, we need you: If you are doing Silverlight, for a living or as a hobby, or if you are interested in user experience, XAML, Expression Blend and many more topics, you should consider joining! This is a great occasion to exchange experiences, to learn from Silverlight experts, and to hear sessions about various topics related to Silverlight. If you want to talk about a topic that is of interest to you, if you want to propose a topic of discussion, or if you just want to hang out, then go to http://www.slugs.ch and register!
    Cheers, Laurent
    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Why it may be good to be confused: Mary Lo Verde’s Motivational Discussion at Oracle

    - by user769227
    Why it may be good to be confused: Mary LoVerde's Motivational Discussion at Oracle, by Olivia O'Connell. Last week, we were treated to a call with Mary LoVerde, a renowned life-balance and motivational speaker. This was one of many events organized by Oracle Women's Leadership (OWL). Mary made some major changes to her life when she decided to free herself of material possessions and take each day as it came. Her life-balance strategies have led her from working with NASA to appearing on Oprah. Mary's MO is "cold turkey is better than dead duck!", in other words, knowing when to quit. It is a surprising concept that flies in the face of the "winners don't quit" notion and focuses on how we limit our capabilities and satisfaction levels by doing something that we don't feel passionate about. Her arguments about quitting were based on the notions that '"it" is in the way of you getting what you really want' and that 'quitting makes things easier in the long run'. Of course, it is often difficult to quit, and though we know that things would be better if we did quit certain negative things in our lives, we are often ashamed to do so. A second topic centred on the concept of Confusion Endurance. Confusion Endurance is based around the idea that it is often good not to know exactly what you are doing, and that it is okay to admit you don't know something when others ask you; essentially, that humility can be a good thing. This concept is attributed to Leonardo Da Vinci, who apparently found liberation in not knowing. Mary says this allows us to "thrive in the tension of not knowing to unleash our creative potential". An anecdote about an interviewee at NASA was used to portray how admitting you don't know can be a positive thing. When NASA asked the candidate a question with no obvious answer and he replied "I don't know", the candidate thought he had failed the interview; actually, the interviewers were impressed by his ability to admit he did not know. If the interviewee had guessed the answer in a real-life situation, it could have cost the lives of fellow astronauts. The highlight of the webinar for me? Mary told how she had a conversation with Capt. Chesley B. "Sully" Sullenberger, who recalled the US Airways Flight 1549 / Miracle on the Hudson incident. After making its descent and finally coming to rest in the Hudson after falling 3,060 feet in 90 seconds, Sully and his co-pilot turned to each other and said "well... that wasn't as bad as we thought". Confusion Endurance at its finest! Her discussion certainly gave food for thought, although personally I was inclined to take some of it with a pinch of salt. Mary LoVerde is the author of The Invitation, and you can visit her website and view her other publications at www.maryloverde.com. For details on the Professional Business Women of California visit: http://www.pbwc.org/

    Read the article

  • What Problems Are Better Solved By SOAP Over REST?

    In the battle for web service supremacy, SOAP and REST have been fighting for years. In my personal opinion this debate should never have existed. Yes, both forms can be used to create an interactive web service, but each was developed independently of the other to solve two different yet similar problems. Based on my research and experience, I would have to say that REST should be the preferred web service methodology and that SOAP should only be used in specific situations. Note, I did not say that I was against SOAP; in fact I actually like to use SOAP when it is needed. Criteria for using SOAP:
        Does the service need a guaranteed level of reliability and security?
        Have the provider and consumer of the service agreed on a standardized data exchange format?
        Does the service need data context and state management?
    If you answer yes to any of these questions, then you may want to consider SOAP as the format for the web service. Another way to look at the relationship between REST and SOAP is to look at the medical field. For most things, a general doctor or your family health care provider can acceptably treat most conditions, from a common cold to a broken bone. A general doctor aligns more with REST, in my opinion, because for most service requirements REST fulfills a project's needs; but what happens if you need a more advanced examination? You would go to a specialist. A specialist already has experience dealing with the specific issues you are experiencing, giving them specific context for how best to treat you going forward. SOAP acts more like a specialist doctor, in that it understands the context of an issue and can treat it based on the state of other patients it has already treated. An example of where I would use SOAP over REST in real life is a single sign-on application. In these cases I need to validate a username and password for authentication and authorization of a web page request. This service would need to maintain state while it authenticated a user and while it validated access to a web page on a subsequent request. This service must process every request for access and must not allow caching, to ensure that every request is processed and the appropriate users are allowed to view selected web pages. References: Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved 11 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each

    Read the article

  • Crash Report in Ubuntu... hardware problem?

    - by Andrew
    Got this on my machine. I was just browsing the web on Chrome and my computer froze. I recently built this machine. I have a feeling it is a hardware problem... possibly one of my parts arrived broken in some way.

        Starting anac(h)ronistic cron
        Stopping anac(h)ronistic cron
        Stopping cold plug devices
        Stopping log initial device creation
        Starting enable remaining boot-time encrypted block devices
        Starting configure network device security
        Starting configure virtual network devices
        Starting save udev log and update rules
        Stopping configure virtual network devices
        Stopping save udev log and update rules
        Checking battery state...
        Stopping System V runlevel compatibility
        Stopping enable remaining boot-time encrypted block devices
        Stopping Mount filesystems on boot
        [   91.573384] BUG: unable to handle kernel NULL pointer dereference at (null)
        [   91.573437] IP: [<ffffffff81313514>] strcmp+0x14/0x30
        [   91.573470] PGD 1f7822067 PUD 1ed7a6067 PMD 0
        [   91.573498] Oops: 0000 [#1] SMP
        [   91.573519] CPU 3
        [   91.573531] Modules linked in: dm_crypt bnep snd_hda_codec_realtek rfcomm bluetooth parport_pc ppdev arc4 fglrx(P) rt2800usb rt2800lib crc_ccitt rt2x00usb rt2x00lib mac80211 cfg80211 psmouse snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device snd joydev mac_hid mei(C) soundcore serio_raw snd_page_alloc lp parport ses enclosure usbhid hid i915 drm_kms_helper drm i2c_algo_bit mxm_wmi video wmi usb_storage
        [   91.573826]
        [   91.573837] Pid: 2297, comm: update-notifier Tainted: P C O 3.2.0-29-generic #46-Ubuntu To Be Filled By O.E.M. To Be Filled By O.E.M./Z77 Extreme4
        [   91.573912] RIP: 0010:[<ffffffff81313514>] [<ffffffff81313514>] strcmp+0x14/0x30
        [   91.573954] RSP: 0018:ffff8801f83f5bb8 EFLAGS: 00010246
        [   91.573982] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
        [   91.574019] RDX: 0000000000000069 RSI: 0000000000000000 RDI: ffff88021adb26f8
        [   91.574056] RBP: ffff8801f83f5bb8 R08: ffff88022f2d6e80 R09: 0000000000000000
        [   91.574093] R10: ffff88021e7dbf00 R11: 0000000000000003 R12: ffff88021c10eb40
        [   91.574130] R13: 0000000000000000 R14: ffff88021adb26f8 R15: ffff8801f83f5d40
        [   91.574168] FS:  00007f958cf53940(0000) GS:ffff88022f2c0000(0000) knlGS:0000000000000000
        [   91.574210] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        [   91.574240] CR2: 0000000000000000 CR3: 000000021f6d7000 CR4: 00000000000406e0
        [   91.574277] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        [   91.574314] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000000
        [   91.574351] Process update-notifier (pid: 2297, threadinfo ffff8801f83f4000, task ffff880208fe2e00)
        [   91.574397] Stack:
        [   91.574409]  ffff8801f83f5be8 ffffffff811ed509 ffff88021adb26c0 ffff88021b8b7020
        [   91.574453]  ffff88021b461c60 fffffffffffffffe ffff8801f83f5c18 ffffffff811ed61f
        [   91.574496]  ffff88021adb26c0 ffff88021b8b7020 ffff8801f83f5dc8 0000000000000001
        [   91.574539] Call Trace:
        [   91.574558]  [<ffffffff811ed509>] sysfs_find_dirent+0x59/0x110
        [   91.574591]  [<ffffffff811ed61f>] sysfs_lookup+0x5f/0x110
        [   91.574621]  [<ffffffff81182745>] d_alloc_and_lookup+0x45/0x90
        [   91.574654]  [<ffffffff8118fe65>] ? d_lookup+0x35/0x60
        [   91.574683]  [<ffffffff811848d2>] do_lookup+0x202/0x310
        [   91.574712]  [<ffffffff8118660c>] path_lookupat+0x11c/0x750
        [   91.574744]  [<ffffffff81318db7>] ? __strncpy_from_user+0x27/0x60
        [   91.574778]  [<ffffffff81186c71>] do_path_lookup+0x31/0xc0
        [   91.574809]  [<ffffffff81187779>] user_path_at_empty+0x59/0xa0
        [   91.574842]  [<ffffffff81187822>] ? do_filp_open+0x42/0xa0
        [   91.574872]  [<ffffffff811877d1>] user_path_at+0x11/0x20
        [   91.574902]  [<ffffffff8117c80a>] vfs_fstatat+0x3a/0x70
        [   91.574933]  [<ffffffff81161cff>] ? kmem_cache_free+0x2f/0x110
        [   91.574965]  [<ffffffff8117c85e>] vfs_lstat+0x31/0x70
        [   91.574993]  [<ffffffff8117c9fa>] sys_newlstat+0x1a/0x40
        [   91.575022]  [<ffffffff81176ee1>] ? do_sys_open+0x171/0x220
        [   91.575053]  [<ffffffff8117cb1a>] ? sys_readlinkat+0x7a/0xb0
        [   91.575086]  [<ffffffff81661ec2>] system_call_fastpath+0x16/0x1b
        [   91.575118] Code: 83 c1 01 40 84 ff 75 ef 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 00 55 31 c0 48 89 e5 66 2e 0f 1f 84 00 00 00 00 00 0f b6 14 07 <3a> 14 06 75 0f 48 83 c0 01 84 d2 75 ef 31 c0 5d c3 0f 1f 00 19
        [   91.577243] RIP  [<ffffffff81313514>] strcmp+0x14/0x30
        [   91.579314]  RSP <ffff8801f83f5bb8>
        [   91.581385] CR2: 0000000000000000

    Read the article

  • Desktop Fun: Happy New Year Wallpaper Collection [Bonus Edition]

    - by Asian Angel
    As this year draws to a close, it is a time to reflect back on what we have done this year and to look forward to the new one. To help commemorate the event we have put together a bonus-size edition of Happy New Year wallpapers for your desktops. Extra Note: We made a special effort to find wallpapers for this collection without the year "printed" on them, thus allowing for reuse as desired and/or needed beyond the 2010 – 2011 holiday. Note: Click on the picture to see the full-size image; these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen's resolution. For more New Year's desktop goodness be sure to check out our Happy New Year icon & font packs collection (link at bottom)!

        Note: This wallpaper will need to be placed on a larger white background in order to increase the height.
        Note: This wallpaper will need to be placed on a larger background in order to increase the width and height.
        Note: This wallpaper comes in multiple sizes and will need to be downloaded as a zip file.
        Note: This wallpaper comes in multiple sizes and will need to be downloaded as a zip file.
        Note: The download size for the original version of this wallpaper is 15 MB.
        Note: The download size for the original version of this wallpaper is 15 MB.

    More Happy New Year Fun: Desktop Fun: Happy New Year Icon and Font Packs. For more wallpapers be certain to see our great collections in the Desktop Fun section.

    Read the article

  • Failed 12.04 installation

    - by Rob Sayer
    I tried installing Ubuntu 12.04 today. Not an upgrade, a new installation. It didn't work. My computer specs:

        Computer: Compaq Presario CQ-104CA
        OS: Windows 7 Home 64 bit
        CPU: AMD V140
        BIOS: latest
        Graphics: AMD M880G with ATI Mobility Radeon HD 4250
        Wireless: Atheros AR9285
        Internal HD: SATA

    I wasn't connected to the internet at the time... I know of a number of people who have installed Ubuntu unconnected and just updated later. It seemed to go normally until I got to the part where I chose to install dual-boot Linux/Windows. Then the screen went black and the following text appeared (I left out the [OK]'s):

        checking battery state
        starting crash report submission daemon
        starting CPU interrupts balancing daemon
        stopping System V runlevel compatibility
        starting configure network device security
        stopping configure network device security
        stopping cold plug devices
        stopping log initial device creation
        starting enable remaining boot-time encrypted block devices
        starting configure network device security
        starting save udev log and update rules
        stopping save udev log and update rules
        stopping enable remaining boot-time encrypted block devices
        checking for running unattended-upgrades
        acpid: exiting
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher

    At this point, the CD is ejected. Then nothing. If I press the return key, it boots Windows. I don't think that's what's supposed to happen. Thinking the CD media or DVD drive may have been faulty, I downloaded the .iso again and made a bootable USB stick, as per your instructions. This time there was no cryptic crash screen. It just booted Windows. I can't find any log files it may have left. Thinking the amd64 version may have been the wrong one, I tried downloading the x86 version. Same thing, both from CD and USB drive. Note I downloaded both files twice. I doubt it was a corrupted download. This is supposed to be a simple, transparent install. I went to the time and trouble of looking up my devices and drivers re Ubuntu beforehand, and was prepared to do some configuration, though I know someone who has the same wireless device and his worked right out of the box. But I spent over 3 hours trying to install it, with only the above to show for it.

    Read the article

  • Why would more CPU cores on virtual machine slow compile times?

    - by Sid
    [edit #2] If anyone from VMware can hit me up with a copy of VMware Fusion, I'd be more than happy to do the same as a VirtualBox vs VMware comparison. Somehow I suspect the VMware hypervisor will be better tuned for hyperthreading (see my answer too).

    I'm seeing something curious. As I increase the number of cores on my Windows 7 x64 virtual machine, the overall compile time increases instead of decreasing. Compiling is usually very well suited to parallel processing, as in the middle part (post dependency mapping) you can simply call a compiler instance on each of your .c/.cpp/.cs/whatever files to build partial objects for the linker to take over. So I would have imagined that compiling would actually scale very well with the number of cores. But what I'm seeing is:

        8 cores: 1.89 sec
        4 cores: 1.33 sec
        2 cores: 1.24 sec
        1 core:  1.15 sec

    Is this simply a design artifact due to a particular vendor's hypervisor implementation (type 2: VirtualBox in my case), or something more pervasive across more VMs that keeps hypervisor implementations simpler? With so many factors, I seem to be able to make arguments both for and against this behavior - so if someone knows more about this than me, I'd be curious to read your answer. Thanks, Sid

    [edit: addressing comments]
    @MartinBeckett: Cold compiles were discarded.
    @MonsterTruck: Couldn't find an open-source project to compile directly. Would be great, but I can't screw up my dev env right now.
    @Mr Lister, @philosodad: Have 8 hw threads, using VirtualBox, so it should be a 1:1 mapping without emulation.
    @Thorbjorn: I have 6.5 GB for the VM and a smallish VS2012 project - it's quite unlikely that I'm swapping in/out, thrashing the page file.
    @All: If someone can point to an open-source VS2010/VS2012 project, that might be a better community reference than my (proprietary) VS2012 project. Orchard and DNN seem to need environment tweaking to compile in VS2012. I really would like to see if someone with VMware Fusion also sees this (for VMware vs VirtualBox compartmentalization).

    Test details:

        Hardware: MacBook Pro Retina
        CPU: Core i7 @ 2.3 GHz (quad core, hyperthreaded = 8 cores in Windows task manager)
        Memory: 16 GB
        Disk: 256 GB SSD
        Host OS: Mac OS X 10.8
        VM type: VirtualBox 4.1.18 (type 2 hypervisor)
        Guest OS: Windows 7 x64 SP1
        Compiler: VS2012 compiling a solution with 3 C# Azure projects

    Compile times were measured by a VS2012 plugin called 'VSCommands'. All tests were run 5 times, the first 2 runs discarded, and the last 3 averaged.

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm, and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue is once was, and other factors such as user-experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • Self-Executing Anonymous Function vs Prototype

    - by Robotsushi
    In JavaScript there are a few clearly prominent techniques for creating and managing classes/namespaces. I am curious which situations warrant using one technique vs. the other. I want to pick one and stick with it moving forward. I write enterprise code that is maintained and shared across multiple teams, and I want to know what the best practice is when writing maintainable JavaScript. I tend to prefer self-executing anonymous functions; however, I am curious what the community vote is on these techniques.

    Prototype:

        function obj() {
        }

        obj.prototype.test = function() {
            alert('Hello?');
        };

        var obj2 = new obj();
        obj2.test();

    Self-Executing Anonymous Function:

        //Self-Executing Anonymous Function
        (function( skillet, $, undefined ) {
            //Private Property
            var isHot = true;

            //Public Property
            skillet.ingredient = "Bacon Strips";

            //Public Method
            skillet.fry = function() {
                var oliveOil;

                addItem( "\t\n Butter \n\t" );
                addItem( oliveOil );
                console.log( "Frying " + skillet.ingredient );
            };

            //Private Method
            function addItem( item ) {
                if ( item !== undefined ) {
                    console.log( "Adding " + $.trim(item) );
                }
            }
        }( window.skillet = window.skillet || {}, jQuery ));

        //Public Properties
        console.log( skillet.ingredient ); //Bacon Strips

        //Public Methods
        skillet.fry(); //Adding Butter & Frying Bacon Strips

        //Adding a Public Property
        skillet.quantity = "12";
        console.log( skillet.quantity ); //12

        //Adding New Functionality to the Skillet
        (function( skillet, $, undefined ) {
            //Private Property
            var amountOfGrease = "1 Cup";

            //Public Method
            skillet.toString = function() {
                console.log( skillet.quantity + " " +
                             skillet.ingredient + " & " +
                             amountOfGrease + " of Grease" );
                console.log( isHot ? "Hot" : "Cold" );
            };
        }( window.skillet = window.skillet || {}, jQuery ));
        //end of skillet definition

        try {
            //12 Bacon Strips & 1 Cup of Grease
            skillet.toString(); //Throws Exception
        } catch( e ) {
            console.log( e.message ); //isHot is not defined
        }

    I feel I should mention that the self-executing anonymous function is the pattern used by the jQuery team.

    Update: When I asked this question I didn't truly see the importance of what I was trying to understand. The real issue at hand is whether or not to use new to create instances of your objects, or to use patterns which do not require constructors or the use of the new keyword. I added my own answer, because in my opinion we should make use of patterns which don't use the new keyword. For more information please see my answer.

    Read the article

  • Avoiding the Black Hole of Leads

    - by Charles Knapp
    Sales says, "Marketing doesn't deliver enough qualified leads. So, we generate 90% of our own leads." Meanwhile, Marketing says, "We generate most of the leads. But Sales doesn't contact them quickly enough, while the lead is still interested." According to Sirius Decisions:

        Up to 90% of leads never make it to closure
        Sales works on only 11% of the leads supplied by Marketing
        Only 18% of the leads Sales accepts convert to opportunities
        Yet 45% of prospects typically buy a product from someone within 12 months

    The root cause of these commonplace complaints is a disconnect between the funnels of marketing and sales. Unfortunately, we often see companies with an assortment of poorly integrated marketing tools. It takes too long and too many people to move the data around, scrub it, upload it from one system to another, and get it routed to the right sales teams. As a result, leads fall through the cracks, contextual information is lost, and by the time sales actually contacts a customer it may be too late. Sales automation alone is not enough. Marketing automation (including social) is not enough. Sales and Marketing must work together. It's time to connect the silos of marketing and sales pipelines and analytics. It's time for integrated Sales and Marketing automation. Integrated pipelines improve lead quality and timeliness. Marketing systems can track a rich set of contextual information about a prospect: self-disclosed information about interests, content viewed, and so on. This insight can equip the sales rep with rich information to make a face-to-face conversation more relevant and more likely to convert to the next stage in the sales process. Integrated lead-to-revenue (LTR) management provides end-to-end visibility, enabling the company to measure what is working. Marketing can measure its impact on revenue and other business outcomes, and sales can harness and redirect marketing investments to areas where they most help achieve sales objectives. It's a win-win play. Marketing delivers more leads that are qualified, cuts cost per lead, and demonstrates a strong Return on Marketing Investment (ROMI). Sales spends more time with warm leads and less time on cold calls, achieves higher close rates, and delivers more revenue. Learn more by attending our Integrated Sales and Marketing session at the upcoming CloudWorld conferences. Or, visit our Sales and Marketing Cloud Service site for videos and other learning resources.

    Read the article

  • Oredev 2012: Summary and source code

    - by Laurent Bugnion
    This week, I had the pleasure of being invited to talk at Oredev, a really cool conference taking place in Malmo, Sweden. The whole event is awesome, including a very special dinner on Monday with sauna and swimming in a 6 degrees cold Baltic sea, and a reception with dinner at the town hall, including the mayor himself. Considering Malmo is a town of 300'000 inhabitants, it is a pretty nice occasion, and the historical building itself is really worth seeing. For those interested, I placed my pictures on my Flickr account. I had a workshop on Tuesday morning about Windows 8 development with XAML/C#, and then a session on Wednesday about MVVM in Windows Phone 8 and Windows 8, of course using MVVM Light. I was very nervous, because I had reworked some of my demos as recently as that morning, in the wake of the Build conference last week and the release of both the Windows Phone SDK and MVVM Light V4.1. Everything went well, however, and judging by the people I talked to after the talk, and by Twitter, everything went pretty well. Before my talk on Tuesday, I had the pleasure to see a talk by Iris Classon (@irisclasson) on the challenges of being a "n00b" and a woman in software development. I especially appreciated her research and conclusions on the lack of women in our industry, a topic that is dear to my heart (because I want the best possible future for my two daughters, and also because I really enjoy working with women on projects and getting a different insight on the art of software development). I really want to thank the excellent organization committee for their hard work and their fantastic welcome to Malmo. In particular, Emily Holweck did a wonderful job and was super helpful throughout the preparation and the conference itself. I took a few pictures during my stay, all with the new Nokia Lumia 920, and hope you will enjoy them too.
    The source code and the slides: The source code is available for download from SkyDrive. You will find the following:
        Windows 8 workshop slides.
        MVVM Applied slides.
        Source code package with:
        Win8Demo: The demo I built during the 4-hour workshop, with some light MVVM, web services (JSON), GridView, design-time data (Blend / Visual Studio designer), Bing Maps integration, location sensor, and Search pane integration.
        SemanticZoomSample: a sample I put together to demonstrate the SemanticZoom control, with two GridViews and of course full design-time data for Blend work. Due to time constraints, I was not able to show this demo during the workshop, but I publish it anyway, hoping it will be useful to someone.
        PictureUploader: The demo I built during my 50-minute session about MVVM Applied in Windows Phone 8 and Windows 8. Code sharing, design-time data and MVVM Light are used in the Windows Phone 8 and Windows 8 apps.
    And in video: You can also see the video of my MVVM talk, thanks to the good services of the Oredev team! MVVM Applied in Windows Phone and Windows 8 from Øredev Conference on Vimeo.
    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Java @Contended annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
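    A minimal illustration of the field-level annotation (in the JDK 8 timeframe it lives in sun.misc, and application code needs -XX:-RestrictContended for it to take effect; the field names here are made up):

        import sun.misc.Contended;  // jdk.internal.vm.annotation.Contended in later JDKs

        public class HotCold {
            // Hot, frequently written field: padded onto its own cache
            // line(s) so writers do not invalidate the fields below.
            @Contended
            volatile long hotCounter;

            // Cold, read-mostly fields can share a line cheaply.
            long configA;
            long configB;
        }

        // run with: java -XX:-RestrictContended ...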

    Read the article

  • What You Said: Staying Productive While Working from home

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your telecommuting/work-from-home productivity tips. Now we're back with a tips and tricks roundup; read on to see how your fellow readers stay focused at home. By far the most common technique deployed was carefully isolating work from home life. Carol writes:
    I love working from home and have done so for 6 years now. I have a routine just as if I was going to an office, except my commute is 12 steps. I get ready for work, grab my purse and smart phone and go up the steps to my office. I maintain a separate phone line and voice mail for work that I cannot answer from anywhere but my work desk. I use call forwarding when I travel so only my office phone number is published. I have VOIP phone service so I can forward calls from the internet if I forget, or need to change where the phone is forwarded. I do have a wired and wireless headset so I can go get a cold drink if on one of those long boring conference calls. I plan my 'get off from work' time and try to stick to it; as with any job some days I am late getting off, but it all works out. I make sure my office is for work only; any other computer play time is in a different part of the house on a different computer. My office has a laptop, dock, a couple of monitors, a multipurpose printer, fax, scanner, and file cabinets - just like the office at a company. I also happen to have a couple of golden retrievers that come to work with me and usually lay quietly until 5, and yes, they know it is 5 pm sometimes before I do. For me, one of the biggest concerns when working from home is not being unproductive, but the danger of never stopping work. You could keep going and going because, let's face it, the company will let you do it, so I set myself up to prevent that and maintain a separateness.

    Read the article
