Search Results

Search found 3515 results on 141 pages for 'energy saving'.


  • Do I need to go to a big-name university?

    - by itaiferber
    As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still getting a pretty decent education while saving myself from over a hundred thousand dollars' worth of debt? Yes, I know questions like this have been asked before (namely here and here), but please bear with me because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know: Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I could easily get rid of over a hundred thousand dollars of debt? And how does the fact that I can get a fifth-year master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth? And if I go to SUNY Binghamton, which is far less well-known than the schools I've mentioned (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit outweighed by a potential long-term loss? The answers to these questions all tie in to my final college decision (again, assuming I make it into these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing). Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but I will be getting little to no financial aid (federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up package-processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and then I will give some answers I have already found. Question: How do I find out which packages I have not used at all? For example, I always use VLC, so I could remove the totem package. (Which I could have used some day, yes.) Of course, package dependencies could force me to have programs installed which I will never use. Notes: Find the packages which consume the most space via Synaptic: select "Status" in the lower left, select "Installed" in the upper left, and sort the column on "size" in the upper right. Then you can decide which big packages you really need. Use aptitude autoremove. Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc. Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.) When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use Thunderbird), aptitude also wants to remove "ubuntu-desktop" and 756 other packages, while apt-get just removes evolution and its helper packages like evolution-common. The Ubuntu lens shows me the most recently used applications, which are candidates for keeping :) Employ deborphan, as I read in this related answer: How do I clean up my harddrive? I should certainly keep essential packages: Keep only essential packages. This question is pretty much a duplicate of How to see what installed packages I have never used for cleaning purposes, but that covers only a few aspects. However, one answer suggests using a program called unusedpkg, but the link seems to be down. There is also a program called Kleen (http://code.google.com/p/kleen/), but it won't compile in 11.10. I hacked it to compile, but the results are unusable: for example, the g++ package was marked as not used for 203, although I had actually used it seconds ago to compile Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable popularity-contest, so I can't find this log file.
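
    Since neither unusedpkg nor Kleen seems usable, here is a minimal sketch of the same idea in Python: rank installed packages by the newest access time of any file they own, so packages whose files have not been read for the longest time float to the top as removal candidates. It assumes dpkg-query and dpkg are available (they are on Ubuntu) and that the filesystem records access times at all; with noatime mounts, or relatime's coarse updates, the ranking is only approximate.

```python
#!/usr/bin/env python3
# Rank installed packages by the newest access time of any file they own.
# Heuristic only: assumes dpkg/dpkg-query are present and that the filesystem
# records access times (noatime/relatime make the result approximate).
import os
import subprocess
import time

def installed_packages():
    out = subprocess.run(["dpkg-query", "-W", "-f=${Package}\n"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

def newest_atime(package):
    """Most recent atime across the package's files, or 0 if none readable."""
    files = subprocess.run(["dpkg", "-L", package],
                           capture_output=True, text=True).stdout.splitlines()
    newest = 0
    for path in files:
        try:
            if os.path.isfile(path):
                newest = max(newest, os.stat(path).st_atime)
        except OSError:
            continue
    return newest

if __name__ == "__main__":
    # Walking every installed package takes a few minutes on a large system.
    ranked = sorted((newest_atime(p), p) for p in installed_packages())
    for atime, pkg in ranked[:30]:   # 30 least recently touched packages
        stamp = time.ctime(atime) if atime else "never (or atime unavailable)"
        print(f"{pkg:30s} last file access: {stamp}")
```

    As with deborphan's output, treat the list as a hint rather than a verdict: a library can sit untouched for months and still be required by something you care about.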

    Read the article

  • How to deal with a fellow programmer who likes to delegate tasks, with a lack of any support from the boss [closed]

    - by Rudy
    I have a problem with my fellow programmer. We are currently working together on a small project that needs to be shipped every 2 weeks. She has a tendency to ask for help with every issue she is facing, whether it's a compile error, an algorithm problem, or even a sync/merge issue that she caused herself. She does not even bother to check Google or try to figure it out by herself. I can be asked to help her 5-10 times a day. Every day her husband keeps calling (4-6 times a day), and most of the code she has delivered is actually incorrect. Today she framed me for sending the wrong delivery product. She went home after lunch on the delivery day without telling the PM or the other team members, and the code she committed does not work at all. It's not even tested. I had no choice but to roll back her code and clean it up just so the product could run. I have warned her about her defective code for almost 3 iterations. She said that when she is not around I should be able to test her module for her. I snapped and yelled that I am not her slave, and reported it directly to my boss. However, my boss is not a person who manages well or cares about software quality. The most important thing to my boss is delivery of the product, whether it is tested or not. He has even asked us to deliver something to the client that was not tested by QA at all, on the next day. Most of our suggestions are not followed by him. He even asked me to apologize to her because I snapped. I am tired of the whole situation. This kind of thing keeps repeating. I do have savings to survive for 6 months, and the idea of resigning keeps haunting me. There is nothing else that can be learned in my current job, and I have been in better environments than this. What should I do about the situation?

    Read the article

  • [EF + Oracle] Inserting Data (1/2)

    - by JTorrecilla
    Prologue: Following the EF series (I, II and III), in this chapter we will see how to create a DB record from EF. Inserting Data: As we indicated in the 2nd post, "One Entity matches a DB record, and one property matches a Table Column". To start, we need to create an object from one of the Entities: EMPLEADOS empleado = new EMPLEADOS(); Also, as I mentioned previously, there is the possibility of using the static function defined by VS for each Entity. Once we have created the object, we can access its properties and fill them as with any ordinary class: empleado.NOMBRE = "Javier Torrecilla"; After we finish filling our Entity's properties, the object must be added to the appropriate ObjectSet in the ObjectContext: enti.EMPLEADOS.AddObject(empleado); or enti.AddToEMPLEADOS(empleado); Both methods do the same thing: create an insert statement. Have we finished? No. Every Entity has a property called "EntityState". This property is an enum of type "EntityState", which has the following values: Detached: the Entity is created, but not added to the Context. Unchanged: there are no pending changes in the Entity. Added: the Entity is added to the ObjectSet, but it has not yet been sent to the DB. Deleted: the object is deleted from the ObjectSet, but not yet from the DB. Modified: there are pending changes to confirm. Let's look at the values the property takes during the creation steps: 1. While the object is created and we are filling the properties: EntityState.Detached. 2. After adding it to the ObjectSet: EntityState.Added. This does not indicate that the record is in the DB. 3. Saving the data: to save the data in the DB, we call the "SaveChanges" method of the ObjectContext. After invoking it, the property will be EntityState.Unchanged. What does the SaveChanges method do? This function synchronizes and sends all pending changes to the DB. It will add, modify or delete all Entities whose EntityState property is set to Added, Deleted or Modified. After finishing, all added or modified entities will change their state to "Unchanged", and deleted entities take the "Detached" state.

    Read the article

  • Validation and Error Generation when using the Data Mapper Pattern

    - by AndyPerlitch
    I am working on saving state of an object to a database using the data mapper pattern, but I am looking for suggestions/guidance on the validation and error message generation step (step 4 below). Here are the general steps as I see them for doing this: (1) The data mapper is used to get current info (assoc array) about the object in db: +=====================================================+ | person_id | name | favorite_color | age | +=====================================================+ | 1 | Andy | Green | 24 | +-----------------------------------------------------+ mapper returns associative array, eg. Person_Mapper::getPersonById($id) : $person_row = array( 'person_id' => 1, 'name' => 'Andy', 'favorite_color' => 'Green', 'age' => '24', ); (2) the Person object constructor takes this array as an argument, populating its fields. class Person { protected $person_id; protected $name; protected $favorite_color; protected $age; function __construct(array $person_row) { $this->person_id = $person_row['person_id']; $this->name = $person_row['name']; $this->favorite_color = $person_row['favorite_color']; $this->age = $person_row['age']; } // getters and setters... public function toArray() { return array( 'person_id' => $this->person_id, 'name' => $this->name, 'favorite_color' => $this->favorite_color, 'age' => $this->age, ); } } (3a) (GET request) Inputs of an HTML form that is used to change info about the person is populated using Person::getters <form> <input type="text" name="name" value="<?=$person->getName()?>" /> <input type="text" name="favorite_color" value="<?=$person->getFavColor()?>" /> <input type="text" name="age" value="<?=$person->getAge()?>" /> </form> (3b) (POST request) Person object is altered with the POST data using Person::setters $person->setName($_POST['name']); $person->setFavColor($_POST['favorite_color']); $person->setAge($_POST['age']); *(4) Validation and error message generation on a per-field basis - Should this take place in the person object or the person mapper object? - Should data be validated BEFORE being placed into fields of the person object? (5) Data mapper saves the person object (updates row in the database): $person_mapper->savePerson($person); // the savePerson method uses $person->toArray() // to get data in a more digestible format for the // db gateway used by person_mapper Any guidance, suggestions, criticism, or name-calling would be greatly appreciated.

    Read the article

  • How do I prevent ISPs from killing downloads of files in mid-transfer?

    - by Gorchestopher H
    I run a small website with a few users, low traffic, mostly to share personal mp3 files with a small community. Depending on their ISP, my users can't always download or stream larger files. By larger I mean larger than 1MB. Essentially the host either stops sending, or the client stops receiving; one of the links along the connection chain simply ends its connection before the transfer completes. Traceroute shows no connection issues. There are no connection issues with short transfers that don't take more than a few seconds; it's these 10-second transfers that just end prematurely. Just doing a straight download with a direct link can yield this error if you have the wrong ISP. Strangely enough, this is most common with users whose ISPs are essentially independent providers that buy service via a fiber link. Unfortunately these providers aren't very knowledgeable, are unable to do any testing, and insist it's a problem with the host. I have gotten my host to transfer my site to different servers of theirs, to the same effect. Nearly identical sites (affiliate sites, actually) experience no such issue. What can I do to further troubleshoot this matter? How can I prove that someone is dropping the ball, and identify who that party is? Can I do a 5MB traceroute? EDIT: Maybe I can clear up some misconceptions with my question: The files are not very large; they are simply over 2MB. The users do not have "slow" connections; they are at least 5 Mbps. This "time out" happens very quickly, in the realm of 5 seconds, so I don't know if it's a timeout or not. The user often gets 1 or 2MB in this chunk of time. I have tried streaming with a Flash player. I have tried saving the target and forcing the download. I have tried allowing the browser to stream the file. I have tried different browsers (FF, IE, Chrome). Users are able to download identical files when on different hosts.
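
    One way to gather evidence is to reproduce the failure outside the browser and log exactly how far each transfer gets before it dies. The sketch below (Python, with a placeholder URL; substitute a real file on the affected host) downloads in small chunks and prints a running byte count, so an affected user can send you a log showing whether transfers consistently die at a similar byte count or after a similar number of seconds.

```python
#!/usr/bin/env python3
# Rough diagnostic: download a file in small chunks and log progress, so a
# dropped transfer leaves evidence of how many bytes and seconds it survived.
# The URL below is a placeholder, not a real file on the site in question.
import time
import urllib.request

URL = "http://example.com/files/sample.mp3"   # placeholder URL
CHUNK = 64 * 1024

received = 0
start = time.time()
try:
    with urllib.request.urlopen(URL, timeout=30) as resp:
        total = resp.headers.get("Content-Length", "unknown")
        while True:
            chunk = resp.read(CHUNK)
            if not chunk:
                break
            received += len(chunk)
            print(f"{time.time() - start:7.1f}s  {received} bytes of {total}")
except Exception as exc:
    print(f"Transfer died after {received} bytes "
          f"({time.time() - start:.1f}s): {exc!r}")
```

    Run from both an affected and an unaffected connection, the logs give you something concrete to show the ISP or the host instead of "it just stops".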

    Read the article

  • How can I improve the battery life under 12.04 on my Inspiron 14z? [duplicate]

    - by cfogelberg
    This question already has an answer here: Tips to extend battery life for laptops and notebooks 24 answers How do I improve the battery life of my Inspiron 14z under Ubuntu 12.04? This laptop gets 4-5 hours of battery life using Windows (e.g. here). I've removed Windows, installed Ubuntu 12.04 and the initial battery life was only 2 hours. With some tweaks (described below) it's still only ~2.5 hours. For reference, the laptop is the latest model of the 14z: i5-3337U processor 32GB MSATA, 500GB HDD (5400rpm) AMD Radeon HD7570M graphics card I have put ext4 partitions on both the SSD and the HDD, and have mounted / to the SSD and /home to the HDD. I also put a 24gb linux swap partition at the start of the HDD, though I figure this won't be used all that much (the laptop has 8gb of RAM). After googling around and reading Ask Ubuntu and other sites extensively, I have done the following steps, and they have improved the battery life ~30 minutes (exact improvement not clear, but battery life is still nowhere near 4-5 hours). Installed Jupiter (and set Performance to "Power Saving") Installed laptop-mode-tools cat /proc/sys/vm/laptop_mode now outputs 5 (previously it output 0) But it's not clear that this will help: AskUbuntu question Turned down the brightness of my screen from full to 1/3 Other things I have heard about but have not tried for fear of frying the laptop or my linux install: Add "pcie_aspm=force" at the end of the line with "quiet splash" in /boot/grub/grub.cfg Enable ALPM, but it may already be enabled in 12.04? Enable i915 framebuffer compression Use a propietary driver for the graphics card? Turn off the graphics card? (what would happen if I relied on the internal Intel bridge?) Use TLP? Spin down the HDD more aggressively (howto, but I think laptop-mode-tools does this already) The only other thing I've noticed is that plastic just above the F5, F6 and F7 keys gets really hot. According to Jupiter my CPU temperature is only 69 celsius and the System Monitor shows CPU load at 7% so I don't think it's the CPU. Maybe it's the graphics card? Also, I've set up MongoDB and LAMP on the machine as well. When I run powertop MongoDB is high in the list, but I'm not sure if that's relevant to battery life because I'm not actually doing anything with MongoDB most of the time. Edit - Additional info as requested $ lspci -nnk | grep -iEA3 "(graphics|vga)" 00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09) Subsystem: Dell Device [1028:057f] Kernel driver in use: i915 Kernel modules: i915 -- 02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series] [1002:6841] Subsystem: Dell Device [1028:057f] Kernel driver in use: radeon Kernel modules: radeon
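
    One thing that makes this kind of tuning easier is measuring the actual power draw after each tweak rather than waiting for the battery to run down. Below is a small Python sketch that reads the draw from sysfs; the BAT0 path and the power_now/current_now/voltage_now attribute names are assumptions, since they vary by kernel and battery firmware, so check what your /sys/class/power_supply actually exposes (powertop reports the same figure).

```python
#!/usr/bin/env python3
# Print the battery's current power draw from sysfs.
# Assumes the battery is BAT0 and exposes power_now (microwatts) or
# current_now/voltage_now (microamps/microvolts); paths vary by hardware.
from pathlib import Path

BAT = Path("/sys/class/power_supply/BAT0")

def read_int(name: str) -> int:
    return int((BAT / name).read_text().strip())

try:
    watts = read_int("power_now") / 1_000_000          # microwatts -> watts
except FileNotFoundError:
    # Fall back to current * voltage if power_now is not exposed.
    watts = read_int("current_now") * read_int("voltage_now") / 1e12

print(f"Battery draw: {watts:.2f} W")
```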

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is quite different from other 'public API' questions like this one: Should a website use its own public API? Second, sorry for my English. You can find the question summarized at the bottom of this post. What I want to achieve is a big website with a public API, so that anyone who likes programming (like me) and likes my website can replicate my website's data with a much better approach (of course with some restrictions). Almost everything could be accessed through the public API. Because of this, I was thinking about making the whole website AJAX-driven. There would be parts of the API which would be limited only to my website (domain), like login and registration. There would be only an INTERFACE on the client side, which would use the public and private API to make this interface work. The website would be ONLY CLIENT SIDE; well, I mean, the website would only use AJAX to consume the API. How do I imagine this? The website would be like a mobile application: the application only sends a request to a web server, which returns JSON; the application parses it and uses it to advance in the application (e.g. login). My thoughts: Pros: The whole website is built with JavaScript, which means I don't need to transfer the HTML to the client, saving bandwidth (I hope so). Anyone can use my website's data to make their own cool things. (Is this a con or a pro? O_O) The public API is always in use, so I can see if there are any errors. Cons: Without JavaScript the website is unusable. Bad guys can easily load the server by requesting too much data (like 10,000 requests per second), but this can be countered by rate-limiting with some PHP code and logging. Probably much more work. So the question in a few words is: Should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (e.g. Facebook; yeah, Facebook is a different story, but could it run with an 'architecture' like this?)
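
    On the rate-limiting con: the fixed-window throttle the post alludes to ("some PHP code and logging") is only a few lines in any language. Here is a rough sketch of the idea in Python rather than PHP, purely as an illustration; the window size and the per-client limit are made-up numbers you would tune per endpoint.

```python
# Fixed-window rate limiter sketch (illustration only; the post's site is
# PHP-based, this just shows the idea). Limits below are invented numbers.
import time
from collections import defaultdict

WINDOW_SECONDS = 1
MAX_REQUESTS_PER_WINDOW = 20          # assumed limit, tune per endpoint

_hits = defaultdict(list)             # client IP -> recent request timestamps

def allow_request(client_ip: str) -> bool:
    """Return True if this client may make another API call right now."""
    now = time.time()
    recent = [t for t in _hits[client_ip] if now - t < WINDOW_SECONDS]
    _hits[client_ip] = recent
    if len(recent) >= MAX_REQUESTS_PER_WINDOW:
        return False                  # caller should respond with HTTP 429
    recent.append(now)
    return True
```

    Whatever rejects the request should return HTTP 429 and log the client, so the "bad guys" show up in the logs before they show up in the load graphs.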

    Read the article

  • Series On Embedded Development (Part 2) - Build-Time Optionality

    - by user12612705
    In this entry on embedded development, I'm going to discuss build-time optionality (BTO). BTO is the ability to subset your software at build time so you only use what is needed. BTO typically pertains more to software providers than to developers of final products. For example, software providers ship source products, frameworks or platforms which are used by developers to build other products. If you provide a source product, you probably don't have to do anything to support BTO, as the developers using your source will only use the source they need to build their product. If you provide a framework, then there are some things you can do to support BTO. Say you provide a Java framework which supports audio and video. If you provide this framework in a single JAR, then developers who only want audio are forced to ship their product with the video portion of your framework even though they aren't using it. In this case, support providing the framework in separate JARs...break the framework into an audio JAR and a video JAR and let the users of your framework decide which JARs to include in their product. Sometimes this is as simple as packaging, but if, for example, the video functionality is dependent on the audio functionality, it may require coding work to cleanly separate the two. BTO can also work at install time, and this is sometimes overlooked. Let's say you're building a phone application which can use Near Field Communications (NFC) if it's available on the phone, but doesn't require NFC to work. Typically you'd write one app for all phones (saving you time)...both those that have NFC and those that don't, and just use NFC if it's there. However, for better efficiency, you can detect at install time whether the phone supports NFC and not install the NFC portion of your app if the phone doesn't support it. This requires that you write the app so it can run without the optional NFC code, and that you write your install app so it can detect NFC and do the right thing at install time. Supporting install-time optionality will save persistent footprint on the phone, something your customers will appreciate, your app "neighbors" will appreciate, and that you'll appreciate when they save static footprint for you. In the next article, I'll talk about runtime optionality.

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the administrators of Subversion. Any comments are welcome, and I shall feed back a synopsis, with a link to this question, to the dev mailing list. Similarly, I've created a post on StackOverflow to get feedback from the programmer/user side of things too. So, without further ado: Vision The first thing on his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations. Roadmap Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are: Obliterate Shelve/Checkpoint Repository-dictated Configuration Rename Tracking Improved Merging Improved Tree Conflict Handling Enterprise Authentication Mechanisms Forward History Searching Log Message Templates If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them. Community And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • HID USB Very Strange Problem...

    - by Lasanha
    I really hope someone can help me here, because I have searched all over the web and nothing comes up. I have always used a PS/2 keyboard and mouse plus a USB keypad (Genius ErgoMedia 500 Gaming Explorer) to play games (MMORPGs, FPSs, you name it), and it works very well with its 11 keys and programmable macros. Now here is the problem: I have a USB mouse that has 4 extra buttons, and I need more buttons; I love buttons. Well, I plug in the USB mouse and disconnect the PS/2 one. Everything is OK until I touch the mouse. When I do, the ErgoMedia goes off, then on again; then I move the mouse or press a button and it starts all over again. Yesterday I went and bought a new mouse that I liked, also USB (an NPlay with macros and all that stuff, 3600 dpi), hoping the problem was only with the other mouse, but no: it does exactly the same thing. The ErgoMedia keeps disconnecting and reconnecting every time I touch the mouse. What I have already done: updated the drivers for both mice; updated the drivers for the ErgoMedia (no specific drivers, it uses the Windows ones); updated the motherboard chipset drivers (actually no, because they were already up to date); tried other USB ports (4 ports in back, 4 in front, and even 1 port on the 16-in-1 card reader); disabled the "allow Windows to turn off this device to save power" option in the device settings; checked Device Manager, where a problem appears only on the ErgoMedia (Human Interface Device) when I move the mouse; used Everest to watch the behaviour, and everything is normal except the disconnecting, but no errors. It is not the power supply: only the ErgoMedia and the mouse are plugged into USB, and I already disabled the 16-in-1 card reader with its USB slot to check. I also cleaned up the IRQ registry, searched the entire internet for a fix, and helped with other people's problems while looking for a fix for mine (I'm not a pro, but not completely clueless either). Now I'm talking to you and begging for help as a last resort. Machine: Acer M3641, Core 2 Quad, 64-bit Vista 64-bit, 4 GB RAM, HD audio and graphics. I really hope someone out there knows a fix for this; maybe it's something simple, so simple that I'm too blind to see it. Sorry for my bad English; I make a lot of mistakes even in my own language. Any help will be very welcome. Thanks for your concern and attention ^^

    Read the article

  • Windows 7 immediately disconnects a USB drive

    - by Daniel Saner
    I am having a problem with Windows 7 x64 consistently disconnecting one specific USB mass storage drive immediately after it is connected. The drive in question is a Cowon C2 digital music player which works in standard mass storage controller mode (i.e. no device-specific drivers needed/available). When I connect the player, Windows plays the "USB connect" sound and the device appears (under its correct name) in the device manager, but it never appears as a drive. The player itself displays "USB Connected" for a split-second before reporting that it has been disconnected again. Since the player, by design, reboots after it has been disconnected, Windows plays the "USB disconnect" sound before restarting the whole cycle once the player has powered back on. I am connecting the player through an Intel X79 Chipset motherboard (Gigabyte GA-X79-UD3) to Windows 7 Pro 64-bit. The player used to work fine the first few times I connected it, showing up as an external drive; it only recently stopped working. It is not a problem with the player, since it works fine when connected to another computer, even such running the exact same operating system. It is also not a problem with the USB controller, since the issue is the same on both the Intel USB 2.0 and the Fresco Logic FL1009 USB 3.0 controller ports. I have also not had the problem with any other drive so far. Among the things I have tried so far: Disabling USB legacy mode in BIOS Disabling energy-saving power down for all USB controllers in Windows' device manager Removing and reinstalling Windows' USB mass storage driver Removing and reinstalling Intel and Fresco Logic USB controller driver Restoring the player to factory defaults None of these made a difference. Again, the player used to work fine on the exact same system just days ago; I didn't install any new hardware or drivers on it since then. I would be very grateful for any hints on what else to try. Edit: Here is another new hint; I found out that when I connect the drive before booting Windows, it is available in Windows Explorer as it should, and does not automatically disconnect. If I remove and reconnect it though, the infinite connect/disconnect-loop starts anew.

    Read the article

  • "shutting down hyper-v virtual machine management service"

    - by icelava
    I have a Windows 2008 R2 server that is a Hyper-V host (Dell PowerEdge T300). Today for the first time I encountered an odd situation: I lost the connection to one of the guest machines, but logging on physically it seems the guest OS is still running, just no longer contactable via the network. I tried to shut down the guest machine (Windows XP) but it would not shut down, getting stuck in a "Not responding" dialog box that cannot be dismissed. I used the Hyper-V management console to reset the machine and it could not get out of the Resetting state. I tried to save another Windows 2003 guest machine, and it got stuck in its Saving state (0%). The other running Windows 2003 guest was stuck at the logon dialog. My first suspicion is that one of the Windows Update patches from this week (10 Nov 2011), which was still pending a system restart, may have something to do with it. Prior to restarting I did not observe any hard disk errors reported in the system event log; I doubt it is a disk-related condition. Well, since I could not do anything with Hyper-V, I proceeded with the Windows Update restart, and now it has been stuck for half an hour at "Shutting down hyper-v virtual machine management service". Shall I force a hard reboot? UPDATE: As reported in the answer, it eventually restarted itself.

    Read the article

  • Convert Custom Firefox Setup to Firefox Portable?

    - by dfree
    I have a pretty awesome firefox set up and spent a lot of time getting it perfect. Is there any way that anyone knows about to convert the entire configuration to portable? Programs like MozBackup are great for backing up the complete set up, but you can't restore a Firefox profile to Firefox portable (maybe there is a workaround to fake it out? or possibly another method?) In case anyone is interested here is the gist of the best add-ons I've found: Autopager (scroll down google and other multi page results without clicking next) Coral IE Tab (IE in firefox - in case a website 'insists' that you use IE) Cyber search (search google straight from the address bar - VERY HELPFUL) Download StatusBar (display progress of downloads in the bottom of ff - no annoying popups FireFTP (erases need for an external FTP client - opens in a tab) Gmail manager (if you use multiple gmail accounts) Session Manager (saving multiple sessions of tabs - ff session recover) Surf Canyon (pull relevant stuff out of the depths of search results - even from craigslist Tab Mix Plus (ESSENTIAL - tab behavior customization - have multiple rows of tabs I also have it set up so you can type 'g test' in the address bar and ff will pull up the google results for 'test'. Similarly have it set up for guitar tabs (tab), facebook (f), wikipedia (w), google maps from my house (gmhome), torrents (tor), ticketmaster (t), rotten tomatoes (rt), craiglist (c) plus about 20 other sites.

    Read the article

  • Java Embedded @ JavaOne: Q & A

    - by terrencebarr
    There has been a lot of interest in Java Embedded @ JavaOne since it was announced a short while ago (see my previous post). As this is a new conference we did get a number of questions regarding the conference. So we put together a brief Q & A on audience focus, dates, registrations, pricing, submissions, etc. Hope this helps and, remember, the Call for Papers ends next week, Jul 18th 2012! Cheers, – Terrence    Java Embedded @ JavaOne : Q & A  Q. Where can I learn more about “Java Embedded @ JavaOne”? A. Please visit: http://oracle.com/javaone/embedded Q. What is the purpose of “Java Embedded @ JavaOne”? A. This net-new event is designed to provide business and technical decision makers, as well as Java embedded ecosystem partners, a unique occasion to come together and learn about how they can use Java Embedded technologies for new business opportunities. Q. What broad audiences would benefit by attending “Java Embedded @ JavaOne”? A. Java licensees; Government agencies; ISVs, Device Manufacturers; Service Providers such as Telcos, Utilities, Healthcare, Energy, Smart Grid/Smart Metering; Automotive/Telematics; Home/Building Automation; Factory Automation; Media/TV; and Payment vendors. Q. What business titles would benefit by attending “Java Embedded @ JavaOne”? A. The ideal audience for this event is business and technical decision makers (e.g. System Integrators, CTO, CXO, Chief Architects/Architects, Business Development Managers, Project Managers, Purchasing managers, Technical Leads, Senior Decision Makers, Practice Leads, R&D Heads, and Development Managers/Leads). Q. When is “Java Embedded @ JavaOne” taking place? A. The event takes place on Wednesday, Oct. 3th through Thursday, Oct. 4th. Q. Where is “Java Embedded @ JavaOne” taking place? A. The event takes place in the Hotel Nikko. Q. Won’t “Java Embedded @ JavaOne” impact the flagship JavaOne conference since the Hotel Nikko is one of the 3 flagship JavaOne conference’s venue hotels? A. No. Separate space in the Hotel Nikko will be used for “Java Embedded @ JavaOne” and will in no way impact scale and scope of the flagship JavaOne conference’s content mix. Q. Will there be a call for papers for “Java Embedded @ JavaOne”? A. Yes.  The call for papers has started but is ONLY for business focused submissions. Q. What type of business submissions can I make for “Java Embedded @ JavaOne”? A. We are accepting 3 types of business submissions: Best Practices: Java Embedded business solutions, methods, and techniques that consistently show results superior to those achieved with other means, as well as discussions on how Java Embedded can improve business operations, and increase competitive differentiation and profitability. Case Studies: Discussions with Oracle customers and partners that describe the unique business drivers that convinced them to implement Java Embedded as part of an infrastructure technology mix. The discussions will highlight the issues they faced, the decision making involved, and the implementation choices made to create value and improve business differentiation. Panel: Moderator-driven open discussion focused on the emerging opportunities Java Embedded offers businesses, as well as other topics such as strategy, overcoming common challenges, etc. Q. What is the call for papers timeline for “Java Embedded @ JavaOne”? A. 
The timeline is as follows: CFP Launched – June 18th Deadline for submissions – July 18th Notifications (Accepts/Declines) – week of July 29th Deadline for speakers to accept speaker invitation – August 10th Presentations due for review – August 31st Q. Where can I find more call for paper details for “Java Embedded @ JavaOne”? A. Please go to: http://www.oracle.com/javaone/embedded/call-for-papers/information/index.html Q. How much does it cost to attend “Java Embedded @ JavaOne”? A. The cost to attend is: $595.00 U.S. — Early Bird (Launch date – July 13, 2012) $795.00 U.S. — Pre-Registration (July 14 – September 28, 2012) $995.00 U.S. — Onsite Registration (September 29 – October 4, 2012) Q. Can an attendee of the flagship JavaOne event and Oracle OpenWorld attend “Java Embedded @ JavaOne”? ?A. Yes.  Attendees of both the flagship JavaOne event and Oracle OpenWorld can attend “Java Embedded @ JavaOne” by purchasing a $100.00 U.S. upgrade to their full conference pass. Filed under: Mobile & Embedded Tagged: Call for Papers, Java Embedded @ JavaOne, JavaOne San Francisco

    Read the article

  • Professional WordPress Business Themes

    - by Matt
    Every now and then JustSkins.com receives quote requests for WordPress design for business websites. Most companies now keep up to date with a blog on their corporate website, that showcases their day to day activities & progresses.  Getting such professional wordpress driven website designed from the scratch costs you a lot. If you have decided to make WordPress the CMS for your business website, there are some Professional WordPress themes you can take a look at. We have created this list to help you save some time to do all the trying and the testing. Optimize by WooThemes Last year one of the most popular Business theme by WooThemes was the Coffee Break theme, Optimize is further adaptation of the same. It is simple, sleek design with great functionality. The customizable front page lets you showcase your work or product etc. Demo | Price: $70, Developer Price: $150 | DOWNLOAD WooThemes is also offering their whole Business theme pack for a very very reasonable fee, If you like multiple designs from them you can get this big deal for only $125 Onyx , Impacto by Simple Themes Simple Themes has been making very crisp & beautiful WordPress Themes & are also very reasonably priced. If their themes solve your purpose $39 membership for 3 months is a good deal.  If you are looking to create quick website, landing page or micro site their templates are best. Demo | Price: $39 for 3 Months Membership Rejuvenate by Templatic One of the most beautiful Premium WordPress Theme, Available in 4 elegant color schemes. This theme can be used for your Beauty, Spa and Studio Business. Demo | Price: $65  | DOWNLOAD Templatic has created great professional business templates, such as Gourmet, Real Estate, Job Board, Automobile & lots More. You can also get a Best Value Offer in $299 for all of Templatic Themes. TheProfessional by ElegantThemes Elegant Themes is known to provide very beautiful & straightforward designs. The professional wordpress theme is a simple, crisp & concise Theme you can use to create a business website. The 3 short blurbs on the homepage are simple, which can be used to point them to your major offerings and the prominent slider indicates a clear call to action. There are 52 themes to choose from & Elegant Themes is giving a great offer at such a small yearly fee. Demo | Price: $39 Yearly Membership  | DOWNLOAD Elegant Themes has a cluster of 52 magnificent themes, and all you have to do is pay $39 to win access to all of them. Join today! Some of the Professional designs that I like for a business website are SimplePress and Corporation. Extatic by Chimera Themes The theme includes plenty of great features including custom feature tour pages, portfolio sections, static feature areas, pricing table page, 20+ shortcodes, multiple page/post options, unlimited custom sidebars which can be assigned to posts/pages, advanced theme style editor and options page and much more. Its a must buy Demo | Price: $37 | DOWNLOAD Corporate by Clover Themes Simple Theme for a small business. Corporate is an clean, powerful and feature-rich corporate theme with dynamic and energy design. Demo | Price: $69.95 | DOWNLOAD Bizco by Themify Bizco is a very professional template for wordpress targeted at corporate and product based businesses. This theme is simple yet highly functional and is suitable for showcasing features of your service or product. With the custom page template you can change the display of your pages and posts easily with our visual custom panel. 
Demo | Price: $70  |DOWNLOAD Devision by Themetrust Devision is a small business wordpress theme that can be used to make a business website within a few minutes. It makes it very easy to showcase and highlight your services or product on the homepage. Demo | Price: Euro 39 | DOWNLOAD BizPress by WPZoom A professional business WordPress theme from WPZoom suitable for companies, organizations, product showcases or other business websites. The theme comes with 4 colour options, featured products / services slider on the homepage, drop down menus, theme options page etc. Demo | Price: $ 69 | DOWNLOAD Clean Classy Corporate by ThemeFuse A very impressive WordPress business theme, that can be used in multiple ways. It is suitable for many kinds, like web products, services, hosting etc etc. Clean Classy Corporate WordPress Theme has a clean crisp look and is professional in appeal. Demo | Price: $49  | DOWNLOAD Insdustry by ThemeJam A powerful Business WordPress Template along with lots of options, colors, and customizable features. This is one for almost any kind of blogger, corporate, or organization. Lots of features, gives it the kind of scalability you might need to create any kind of website. Demo | Price: $ 59 | DOWNLOAD AppPress by ChimeraThemes This professional business WordPress theme includes 5 different colour schemes, advanced theme options page, multiple homepage sliders, custom widgets and page templates. The theme also includes a range of other unique features such as custom title, live style editor to modify colours, font styles, sizes etc, and 20+ shortcodes for creating pricing tables, content columns, boxes, buttons and others. Demo | Price: $ 37 | DOWNLOAD Why WordPress Professional Template? You can modify them, these usually come with a lot of fancy features that enable you to create the website as per your usability & choice. In some cases the  Premium WordPress business themes can be accessed through a subscription service. Premium Vs Free WordPress Themes There are very good Free WordPress themes out there that you can use to modify and code further or create what you want, but this possible when you are technically able. On the contrary Premium WordPress business themes offers great features & can save you a lot of time and money. It varies from business to business, some like to keep their website simple while most want to keep cool nifty features and abilities to scale it differently for various sections, products or categories. All this & more is possible with a Professional Business theme that is suitable/close to your needs.

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications, as well as a complete Linux operating system, from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine-grained control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classical" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based). An application running inside a container is executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O resources. Linux Containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided the distribution supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux Containers within a virtualized Linux instance that runs inside Oracle VM Server or Oracle VM VirtualBox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents one user's application from hogging the entire system and ensures that each user only has access to his own data set. It also helps to save main memory — if multiple instances of the same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use; they are generally held in memory once and mapped into multiple processes.
    Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and can be duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This makes it possible to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The open-source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory and CPU cycles, as well as the disk and network throughput (in MB/s or IOPS), that is available to an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or exclusive access to a network device. It's also possible to restrict a process group's access to, and visibility of, the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux Containers are based, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will follow in the second part of this article. Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy Linux Containers on Wikipedia - Lenz Grimmer Follow me on: Personal Blog | Facebook | Twitter | Linux Blog |
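
    To make the Control Groups part concrete, here is a minimal sketch of capping a process's memory by hand through the cgroup filesystem, the same mechanism LXC drives for whole containers. It assumes a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory (typical for kernels of that era), root privileges, and a memory-hungry test command (the stress invocation is just a placeholder); on current cgroup-v2 systems the paths and file names differ.

```python
#!/usr/bin/env python3
# Minimal cgroup-v1 sketch: cap a child process at ~256 MB of RAM by writing
# to the cgroup filesystem. Assumes the memory controller is mounted at
# /sys/fs/cgroup/memory and the script runs as root; the "stress" command
# below is a placeholder workload and may not be installed.
import os
import subprocess

CGROUP = "/sys/fs/cgroup/memory/lxc-demo"
LIMIT_BYTES = 256 * 1024 * 1024

os.makedirs(CGROUP, exist_ok=True)
with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
    f.write(str(LIMIT_BYTES))

# Start the workload, then move its PID into the new cgroup (a real tool
# would place the PID before it starts allocating; fine for a sketch).
proc = subprocess.Popen(["/usr/bin/stress", "--vm", "1", "--vm-bytes", "512M"])
with open(os.path.join(CGROUP, "tasks"), "w") as f:
    f.write(str(proc.pid))

proc.wait()   # the kernel enforces the cap; the workload is killed or swaps
```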

    Read the article

  • SQLAuthority News – Don’t Be Afraid To Fool The World – Video by John Sonmez

    - by Pinal Dave
    Sometimes certain words and statements grab your attention, and it is hard to stop thinking about them afterwards. Something similar happened a few days ago when I read a Twitter statement from my friend and Pluralsight author John Sonmez. He tweeted a very interesting statement: "I don't know a single successful person, who doesn't deep down think that have the world fooled. #fooltheworld" by John Sonmez. When I read it, I was extremely intrigued by this statement. I read it many times, I shared it with my family, and I just could not stop interpreting it. It was indeed fun to read it again and again, and there are so many different meanings one can take away from the statement. I know John very well; he is a wonderful person and has very positive energy for life. I just had to request that he build a video around it. Just five days after my request, John created a wonderful video around this subject. I watched it multiple times, as it was a wonderful video. I am not going to write much about what was in the video, as I suggest you watch the video itself. Here is one of the personal stories I want to share which is absolutely relevant to this video. I think my story 100% resonates with John's story. A Real Story from My Past: Three years ago, I submitted a session to one of the SharePoint conferences as a SQL Server session. My session was accepted and I prepared it very well. I put in more than two months' time to prepare for the session and I was very excited to present it. I reached the event venue after traveling thousands of miles, very much excited to present the session. However, there was a little mix-up with the sessions. There were multiple sessions with titles similar to mine. One of the other speakers had also proposed a database-related session and it was selected. When the material went to print, the printing team got confused and by mistake swapped the sessions. The other speaker got the Performance with SQL Server session and I received the Performance with SharePoint session. It was indeed a big mix-up, but that is how it appeared in the event guide and how it was marketed everywhere at the event. A Big Mix-Up: I had to talk with the event organizer and we came to the conclusion that we all had good intentions but things just got mixed up, and now was the time when "the show must go on". I had a great amount of hesitation about going and presenting the session, as I had personally never worked closely with SharePoint in my life, and my session abstract talked about SharePoint tricks in depth. Two hours before the session I took the help of one of my friends and installed SharePoint on my box. He showed me a few things here and there, but there was never enough time to learn everything I wanted to learn. The Moments of Confidence: I was very scared and nervous about going on stage, as SharePoint was not something I felt comfortable with. However, I decided to go on stage with confidence, as a SharePoint expert. Though I did not know SharePoint at its best, I had confidence that whatever I knew was correct and I would not misguide people. I had no intention of fooling people, but I also had no intention of accepting that I was a fool and that you had all wasted your time and money attending my session. I decided to be honest, but at the same time decided to take the session beyond my expertise. The sixty minutes of the session went fine and I was able to handle all the difficult questions at a satisfactory level.
When the session was over, my feeling was that I would not have presented or talked any differently if I had had more knowledge of SharePoint at that time. I think it was one of my best sessions, and that was reflected in the session feedback as well. I was the best speaker across all the tracks and my session had the highest ranking. I was delighted, and I learned a very valuable lesson: I must go beyond my limits and knowledge. I must aim higher and work harder. I should not lie, but I should have confidence that I have a good heart and that I put 100% into my efforts. Lessons Learned: Since this incident I have learned a lot about SharePoint, and I am now a regular speaker at various SharePoint conferences along with SQL Server sessions. I am motivated and I am not afraid. I know people have lots of expectations of me, but I have learned not to judge myself before I do my best. I leave the judgement of my efforts to my audience. I do not take the burden of the feedback on myself, even though I know what my audience expects from me. I know what I know, and I put in my best. I must go out; if I fail, I learn from my mistake, but I must keep my progress trajectory very high. As John said in the video, sometimes success is not something we can achieve 100%, but we can keep getting closer to it. As long as we do not lose focus on our goal and do not deviate from our path of progress, we are doing things right. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Connection speed drops from 1 Gbps to 10 Mbps (Vista 64)

    - by Kevin Hakanson
    I recently got a Windows Home Server (HP MediaSmart Server EX490) set up so I could do backups and other things. However, I am having trouble on my Vista 64 PC. The backup will be making great progress, then it will just slow down. At one point, I noticed the lights on my Netgear GS105 indicated it was not using a 1000 Mbps connection, but a 10 Mbps one. I checked the Status of the Local Area Connection (Intel(R) 82567V-2 Gigabit Network Connection) and that also showed the same slow speed. This has happened several times in the last couple of days. When I disabled the network device and then enabled it, it established the 1 Gbps connection again. However, some of the time the Sent Bytes activity in the Status window indicates that the data flow is still slow (100 to 1000 bytes every couple of seconds). Obviously, at this rate I could back up faster to floppy disk. :) My question is how to diagnose and fix this problem. When I look at the Administrative Events, I see an error: Bonjour Service 456: ERROR: read_msg errno 10054 (An existing connection was forcibly closed by the remote host.) And a warning: e1yexpress Intel(R) 82567V-2 Gigabit Network Connection Link has been disconnected. I suspect there is some power-saving mode at work. I found a post suggesting System Idle Power Saver (SIPS) may be the issue. I am going to try that, but I am looking for other suggestions or diagnostic advice. I have several new items in this configuration: server, client software, switch and Cat6 cables.

    Read the article

  • Top 4 Lame Tech Blogging Posts

    - by jkauffman
    From a consumption point of view, tech blogging is a great resource for one-off articles on niche subjects. If you spend any time reading tech blogs, you may find yourself running into several common, useless types of posts tech bloggers slip into. Some of these lame posts may just be natural due to common nerd psychology, and some others are probably due to lame, lemming-like laziness. I’m sure I’ll do my fair share of fitting the mold, but I quickly get bored when I happen upon posts that hit these patterns without any real purpose or personal touches.

    1. The Content Regurgitation Posts

    This is a common pattern fueled by the starving pan-handlers in the web traffic economy. These are posts that are terse opinions or addendums to an existing post. I commonly see these involve huge block quotes from the linked article, which almost always make up over 50% of the post itself. I’ve accidentally gone to these posts when I’m knowingly only interested in the source material. Web links can degrade as well, so if the source link is broken, then, well, I’m pretty steamed. I see this occur with simple opinions on technologies, Stack Overflow solutions, or various tech news like posts from Microsoft. It’s not uncommon to go to the linked article and see the author announce that he “added a blog post” as a response or summary of the topic. This is just rude, but those who do it are probably aware of this. It’s a matter of winning that sweet, juicy web traffic. I doubt this leeching is fooling anybody these days. I would like to rally human dignity and urge people to avoid these types of posts, and just leave a comment on the source material.

    2. The “Sorry I Haven’t Posted In A While” Posts

    This one is far too common. You’ll most likely see this quote somewhere in the body of the offending post: I have been really busy. If the poster is especially guilt-ridden, you’ll see a few volleys of excuses. Here are some common reasons I’ve seen, which I’ll list from least to most painfully awkward:

    - Out of town
    - Vague allusions to personal health problems (these typically include phrases like “sick”, “treatment”, and “all better now!”)
    - “Personal issues” (which I usually read as “divorce”)
    - Graphic or specific personal health problems (maximum awkwardness potential is achieved if you see links to charity fund websites)

    I can’t help but try over-analyzing why this occurs. Personally, I see this as an amalgamation of three plain factors:

    - Life happens
    - Us nerds are duty-driven, and driven to guilt at personal inefficiencies
    - Tech blogs can become personal journals

    I don’t think we can do much about the first two, but on the third I think we could certainly contain our urges. I’m a pretty boring guy and, whether I like it or not, I have an unspoken duty to protect the world from hearing about my unremarkable existence. Nobody cares what kind of sandwich I’m eating. Similarly, if I disappear for a while, it’s unlikely that anybody who happens upon my blog would care why. Rest assured, if I stop posting for a while due to a vasectomy, you will be the first to know.

    3. The “At A Conference”, or “Conference Review” Posts

    I don’t know if I’m like everyone else on this one, but I have never been successfully interested in these posts. It even sounds like a good idea: if I can’t make it to a particular conference (like the KCDC this year), wouldn’t I be interested in a concentrated summary of events? Apparently, no! Within this realm, I’ve never read a post by a blogger that held my interest. What really baffles me is that, for whatever reason, I am genuinely engaged and interested when talking to someone in person regarding the same topic. I have noticed the same phenomenon when hearing about others’ vacations. If someone sends me an email about their vacation, I gloss over it and forget about it quickly. In contrast, if I’m speaking to that individual in person about their vacation, I’m actually interested. I’m unsure why the written medium eradicates the intrigue. I was raised by a roaming pack of friendly wild video games, so that may be a factor.

    4. The “Top X Number of Y’s That Z” Posts

    I’ve seen this one crop up a lot more in the past few years. Here are some fabricated examples:

    - 5 Easy Ways to Improve Your Code
    - Top 7 Good Habits Programmers Learn From Experience
    - The 8 Things to Consider When Giving Estimates
    - Top 4 Lame Tech Blogging Posts

    These are attention-grabbing headlines, and I’d assume they rack up hits. In fact, I enjoy a good number of these. But I’ve been drawn to articles like this just to find an endless list of identically formatted posts on the blog’s archive sidebar. Oftentimes these posts have overlapping topics, too. These types of posts give the impression that the author prioritized and organized the points after comprehensively considering the topic. Did the author really weigh all the possibilities when identifying the “Top 4 Lame Tech Blogging Patterns”? Unfortunately, probably not. What a tool. To reiterate, I still enjoy the format, but I feel it is abused. Nowadays, I’m pretty skeptical when approaching posts in this format. If these trends continue, my brain will filter these blog posts out just as effectively as it ignores the encroaching “do xxx with this one trick” advertisements.

    Conclusion

    To active blog readers, I hope my guide has saved you precious time by helping you identify lame blog posts at a glance. Save time and energy by skipping over the chaff of the internet! And if you author a blog, perhaps my insight will help you to avoid the occasional urge to produce these needless filler posts.

    Read the article

  • Nerdstock 2012: A photo review of Microsoft TechEd North America 2012

    - by The Un-T Guy
    Not only could I not fathom that I would ever be attending a tech event of the magnitude of TechEd, neither could any of my co-workers. As the least technical person in the history of Information Technology ever, I felt as though I were walking into the belly of the beast, fearing I’d not be allowed out until I could write SSIS packages, program in Visual Basic, or at least arm wrestle a DBA. Most of my fears were unrealized.

    But I made it. I was here. I even got to wear the Mark of the Geek neck package with schedule, eyeglass cleaners, name badge (company name obfuscated so they don’t fire me), and a pen. The name badge was seemingly the key element, as every vendor in the place wanted to scan it to capture name, email address, and numbers to show their bosses back home. It also let me eat the food and drink the coffee, so that’s a fair trade.

    A recurring theme throughout the presentations and vendor demos was “the Cloud” and BYOD (bring your own device). The below was a common sight throughout the week, as attendees from all over the world brought their own devices and were able to (seemingly) seamlessly connect to the Worldwide Innerwebs. Apparently proof that Microsoft and the event organizers were practicing what they were preaching.

    “Cavernous” is one way to describe the downstairs facility itself. “Freaking cavernous” might be more accurate. Work sessions were held in classrooms on the second and third floors but the real action was happening downstairs. Microsoft bookstore, blogger hub (shoutout to Geekswithblogs.net), The Wall (sans Pink Floyd, sadly), couches, recharging stations…

    …a game zone with pool and air hockey tables, pinball machines, foosball…

    …vintage video games…

    …and even a giant chess board. Looked like this guy was opening with the Kaspersky parry.

    The blend of technology and fantasy even went so far as to bring childhood favorites to life. Assuming, of course, your childhood was pre-video games (like mine) and you were stuck with electric football and Rock ‘em Sock ‘em robots:

    And, lest the “combatants” become unruly or – God forbid – afternoon snacks were late, Orange County’s finest was on the scene to keep the peace. On a high-tech mode of transport, of course.

    She wasn’t the only one to think this was a swell way to transition from one concourse to the next. Given the level of support provided by the entire Orange County Convention Center staff, I knew they had to have some secret.

    Here’s one entrance to the vendor zone/“Technical Learning Center.” Couldn’t help but think of them as the remora attached to the Whale Shark that is Microsoft…

    …or perhaps planets orbiting the sun. Microsoft is just that huge, and it seemed like every vendor in the industry looks forward to partnering with the tech behemoth.

    Aside from the free stuff from the vendors, probably the most popular place in the house was the dining area. Amazing spreads every day, multiple times a day. While no attendance numbers were available at press time, literally thousands of attendees were fed, and fed well, every day. And lest you think my post from earlier in the week exaggerated about the backpacks…

    …or that I’m exaggerating about the lunch crowds. This represents only about 25-30% of the lunch crowd – it was all my camera could capture at once. No one went away hungry.
    The only thing missing was a vat of Red Bull, but apparently organizers went old school, with probably 100 urns of the original energy drink – coffee – all around the venue.

    Of course, following lunch and afternoon sessions, some preferred the even older school method of re-energizing. There were rumors that Microsoft was serving graham crackers and milk in this area. But they were only rumors.

    Cannot overstate the wonderful service provided by the Orange County Convention Center staff. Coffee, soft drinks, juice, and water were always available. Buffet meals were delicious with a wide range of healthy options available, in addition to hundreds (at least) of special meal requests supported every day. Ever tried to keep up with an estimated 9,000 hungry and thirsty IT-ers? These folks did. Kudos to all of the staff and many thanks!

    And while I occasionally poke fun at the Whale Shark, if nothing else this experience convinced me of one thing: Microsoft knows how to put on a professional event. Hundreds of informative, professionally delivered sessions, covering a wide range of topics set at varying levels of expertise (some that even I was able to follow), social activities, vendor partnerships…they brought everything you could ask for to inform, educate, and inspire an entire IT industry.

    So as I depart the belly of the beast, I can both take pride in the fact that I survived the week and marvel at the brilliance surrounding me. The IT industry – or at least the segment associated with Microsoft – is in good, professional hands. And what won’t fit in their hands can be toted in the Microsoft-provided backpacks. Win-win.

    Until New Orleans…

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery

    In particular I have been reading:

    - SSD and NTFS Compression Speed Increase?
    - Does NTFS compression slow SSD/flash performance?
    - Will somebody benchmark whole disk compression (HD, SSD) please? (may have to scroll up)

    The first link is particularly dreamy, but maybe heads a little too far into the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): Using highly compressable data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]. Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore

    It so happens that, thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. Which means, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing

    The beginning of this post has me a bit timid: it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
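    For reference, here is a hedged sketch of what such a test might look like from an elevated command prompt, using the built-in compact tool. The switch behavior (especially whether /u physically decompresses files rather than merely clearing the attribute) should be verified against compact /? on the target system before drawing any conclusions:

      rem Show the current compression state of everything under Program Files
      compact /s:"C:\Program Files"

      rem Compress the folder tree, ignoring errors, quietly
      compact /c /s:"C:\Program Files" /i /q

      rem Reverse the experiment: explicitly decompress the same tree
      compact /u /s:"C:\Program Files" /i /q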

    Read the article

  • Downloading a file from the internet with '&' in URL using wget

    - by matt_tm
    Hi, I'm trying to download a file from a URL that looks like this: http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf

    Within the browser, this link prompts me to download a file called x.pdf irrespective of what DEF is (but 'x.pdf' is the right content). However, using wget, I get the following:

      >wget.exe http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf
      SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
      syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc
      --2011-01-06 07:52:05--  http://pdf.example.com/filehandle.ashx?p1=ABC
      Resolving pdf.example.com... 99.99.99.99
      Connecting to pdf.example.com|99.99.99.99|:80... connected.
      HTTP request sent, awaiting response... 500 Internal Server Error
      2011-01-06 07:52:08 ERROR 500: Internal Server Error.
      'p2' is not recognized as an internal or external command, operable program or batch file.

    This is on a Windows Vista system.

    Edit 1:

      >wget.exe "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf"
      SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
      syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc
      --2011-02-06 10:18:31--  http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf
      Resolving pdf.example.com... 99.99.99.99
      Connecting to pdf.example.com|99.99.99.99|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 4568 (4.5K) [image/JPEG]
      Saving to: `filehandle.ashx@p1=ABC&p2=DEF.pdf'
      100%[======================================>] 4,568       --.-K/s   in 0.1s
      2011-02-06 10:18:33 (30.0 KB/s) - `filehandle.ashx@p1=ABC&p2=DEF.pdf' saved [4568/4568]
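    For anyone hitting the same thing: in the unquoted form, cmd.exe treats the & as a command separator, which is why 'p2' ends up being run as if it were a program. Two hedged variants that avoid this (the -O output filename below is just an illustrative choice, not something from the original question):

      rem Quote the whole URL so cmd.exe does not split it at the ampersand,
      rem and use -O to pick the output filename explicitly
      wget.exe -O DEF.pdf "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf"

      rem Alternatively, escape the ampersand with the cmd.exe escape character
      wget.exe http://pdf.example.com/filehandle.ashx?p1=ABC^&p2=DEF.pdf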

    Read the article

  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with Open Office: I open a simple Calc spreadsheet, it comes up normally, but then after anywhere from 5 seconds to several minutes (without even touching the Calc window) OO crashes, then comes up through recovery. If I let it "recover" it will do so and bring the spreadsheet up again, only to repeat the crash scenario again. If I kept clicking "OK" it would apparently do this all day. I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e., rename the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel it complains of errors in the file, and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1)

    How to reproduce:
    1. Start Open Office Calc.
    2. Save the workspace as "CrashTest.ods".
    3. From Task Manager, kill Open Office (soffice.exe/soffice.bin -- one of each).
    4. Double-click the saved "CrashTest.ods" in Explorer.
    5. OO puts up a message that recovery will occur -- allow it.
    6. When the Calc window comes up, don't touch it -- just wait about 10 seconds.
    7. The Calc window closes and OO puts up a message that recovery will occur -- from now on the sequence repeats.

    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as Open Office Bug 1211094. Sigh!! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.
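    For anyone reproducing this, the profile reset mentioned above can be done from a command prompt. This is only a sketch: it assumes the default OpenOffice.org 3.x profile location under %APPDATA% (worth verifying on your install) and that all soffice processes are closed first; OpenOffice recreates a clean profile on the next start:

      rem With soffice.exe / soffice.bin closed, move the user profile out of the way
      ren "%APPDATA%\OpenOffice.org\3\user" user.bak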

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the Cloud in the Big Data story.

    What is Cloud?

    Cloud has been the biggest buzzword around for the last few years. Everyone knows about the Cloud and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications that require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Examples of Cloud Computing and Big Data are Google and Amazon.com. Both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post.

    There are two different Cloud Deployment Models: 1) the Public Cloud and 2) the Private Cloud.

    Public Cloud

    A Public Cloud is cloud infrastructure built by commercial providers (Amazon, Rackspace, etc.) that creates a highly scalable data center, hides the complex infrastructure from the consumer, and provides various services.

    Private Cloud

    A Private Cloud is cloud infrastructure built by a single organization, which manages a highly scalable data center internally.

    Here is a quick comparison between the Public Cloud and the Private Cloud from Wikipedia:

                        Public Cloud                        Private Cloud
    Initial cost        Typically zero                      Typically high
    Running cost        Unpredictable                       Unpredictable
    Customization       Impossible                          Possible
    Privacy             No (host has access to the data)    Yes
    Single sign-on      Impossible                          Possible
    Scaling up          Easy while within defined limits    Laborious but no limits

    Hybrid Cloud

    A Hybrid Cloud is cloud infrastructure built from the composition of two or more clouds, such as a public and a private cloud. The hybrid cloud gives the best of both worlds as it combines multiple cloud deployment models together.

    Cloud and Big Data – Common Characteristics

    There are many characteristics of Cloud Architecture and Cloud Computing that are also essential for Big Data. They overlap heavily, and in many places it just makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of the characteristics of cloud computing important to Big Data:

    - Scalability
    - Elasticity
    - Ad-hoc Resource Pooling
    - Low Cost to Set Up Infrastructure
    - Pay on Use or Pay as You Go
    - Highly Available

    Leading Big Data Cloud Providers

    There are many players in the Big Data Cloud, but we will list a few of the known players here.

    Amazon

    Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence they launched their Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently and now it is one of their primary businesses besides retail. Amazon also offers Big Data services under Amazon Web Services. Here is the list of the included services:

    - Amazon Elastic MapReduce – processes very high volumes of data
    - Amazon DynamoDB – a fully managed NoSQL (Not Only SQL) database service
    - Amazon Simple Storage Service (S3) – a web-scale service designed to store and accommodate any amount of data
    - Amazon High Performance Computing – provides low-latency, tuned high performance computing clusters
    - Amazon Redshift – a petabyte-scale data warehousing service

    Google

    Though Google is known for its search engine, we all know that it is much more than that.

    - Google Compute Engine – offers secure, flexible computing from energy efficient data centers
    - Google BigQuery – allows SQL-like queries to run against large datasets
    - Google Prediction API – a cloud based machine learning tool

    Other Players

    Besides Amazon and Google we also have other players in the Big Data market. Microsoft is also attempting Big Data in the Cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware.

    Things to Watch

    Cloud based solutions provide great integration with the Big Data story and are also very economical to implement. However, there are a few things one should be very careful about when deploying Big Data on cloud solutions. Here is a list of a few things to watch:

    - Data Integrity
    - Initial Cost
    - Recurring Cost
    - Performance
    - Data Access Security
    - Location
    - Compliance

    Every company has a different approach to Big Data and different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud.

    Tomorrow

    In tomorrow’s blog post we will discuss various Operational Databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
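    As a rough, hedged illustration of the kind of command-line entry point the services listed above expose (the bucket, file, dataset, and table names below are made up, and the exact flags may vary by CLI version):

      rem Stage a local data file in Amazon S3 (bucket and file names are hypothetical)
      aws s3 mb s3://my-bigdata-bucket
      aws s3 cp sales-2013.csv s3://my-bigdata-bucket/raw/sales-2013.csv

      rem Run a SQL-like aggregate against Google BigQuery with the bq tool
      rem (dataset and table names are hypothetical)
      bq query "SELECT COUNT(*) FROM mydataset.weblogs"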

    Read the article

< Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >