Search Results

Search found 20155 results on 807 pages for 'things'.

Page 323/807

  • Multi-monitor aterm transparency

    - by Bryan Ward
    I have 3 monitors, and I set the background across them using xpmroot my-5760x1200bg.png. I then set up aterm to use transparency by adding the following to my ~/.Xdefaults file:

        aterm*transparent:true
        aterm*shading:60
        aterm*background:Black
        aterm*foreground:White
        aterm*scrollBar:true
        aterm*scrollBar_right:true
        aterm*transpscrollbar:true
        aterm*saveLines:32767
        aterm*font:*-*-fixed-medium-r-normal--*-140-*-*-*-*-iso8859-1
        aterm*boldFont:*-*-fixed-bold-r-normal--*-*-140-*-*-*-*-iso8859-1

    I am getting transparency on my aterm windows, but the image showing through the transparency isn't correct. On the left monitor things are fine, but the middle and right monitors both seem to use the leftmost 1920x1200 of the background image as what is behind the terminal window. It is as if every screen had the same background as the monitor on the left. Is this something that can be configured correctly, or is it a bug? I'm running Gentoo Linux with Xmonad.

    Read the article

  • Unable to run VMs on hyper-v

    - by PRAWAT-DS
    Folks/Mates, I need some advice and assistance regarding testing Hyper-V. Here is my hardware configuration: 1) Intel i5 processor (i5-750) 2) Intel M/B DP55WB 3) 6 GB DDR3 RAM. OS = Server 2008 R2 Standard (evaluation copy). I installed 2008 R2 on my machine and added the Hyper-V role to it. I created 2 VMs and installed an OS on each, but after finishing the OS installation the VMs do not boot up. After the OS installation finishes, the VM reboots automatically (normal behaviour), shows "preparing your system for first time", then reboots again and doesn't come back online. A few things to note: when I run "securable" on my Server 2008 R2 OS it shows that the processor does not support hardware virtualization, but (since my desktop is dual boot) when I run "securable" on my Windows 7 OS, it shows that the processor does support hardware virtualization. The VT option is already enabled in the BIOS. Any help and suggestions are highly appreciated :) Thanks in advance. Pradeep Rawat

    Read the article

  • Azure VM with many IPs or SSL certificates

    - by timmah.faase
    I am looking to move our hosting environment to Azure and have created a sandpit VM to figure things out. We host around 300-400 websites in IIS, and about 2% of these sites have unique, non-wildcard certificates, each requiring a unique public IP in our current setup. Can you get a range of IPs pointing to 1 VM/endpoint? Or is it possible to create an SSL proxy? I've never created an SSL proxy but like the idea of it. I'd need advice here on how to proceed if this is the best option. Sorry if this has been answered! Sorry also if my question isn't worded eloquently.

    Read the article

  • update manager in ubuntu 12.10 can't access repository, but software center can

    - by user103597
    For some reason, whenever I try to search for updates with Ubuntu 12.10's update manager, I always get this error: Failed to download repository information, followed by the following details:

        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/i18n/Translation-en Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/i18n/Translation-en_CA Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/source/Sources Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/binary-amd64/Packages Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/binary-i386/Packages Unable to connect to extras.ubuntu.com:http:
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    Initially I thought that for whatever reason the repositories were down, so I switched from the Canada server to the main server. I still got the same error. I also tried installing some things from the Ubuntu Software Center. Funny thing is, that worked fine and I was able to successfully download and install software from the Software Center, so it seems that only the update manager can't access the repositories. I have searched for and found similar cases (relating to Ubuntu 12.10), but most of those cases involved PPAs, and I don't use any PPAs. Help would be appreciated. Thanks.

    Read the article

  • Tracking changes to firewall configs?

    - by jmreicha
    One other individual and I will be taking over some of the daily firewall management duties soon, and I'm looking for a way to track changes to our firewall configurations for auditing purposes. I need some ideas on a good way to track the changes that are made. I don't have a lot of specific criteria, but here are some of the basic things I would like to be able to do: access previous revisions of firewall configs; see what changes were made and by whom; see when specific changes were made. I'm wondering if some sort of revision control software would work here as a way to track the changes, or if some other approach would work better for managing change control in this situation. I'm open to any and all suggestions at this point. EDIT: We are using a Checkpoint pair in an active/passive configuration. I will update again with specific model numbers when I get a chance.
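
    As one concrete way to picture the revision-control option, a small scheduled script can pull the current config into a local git repository and commit it; the host name, paths and fetch command below are hypothetical stand-ins for whatever export mechanism the Checkpoint pair actually offers.

        #!/usr/bin/env python
        # Sketch: snapshot a firewall config into a git repo for auditing.
        # Assumes git is installed (with user.name/email configured) and that the
        # config can be copied to a local file, e.g. via scp or a vendor export
        # tool; the scp command and paths here are placeholders.
        import datetime
        import os
        import shutil
        import subprocess

        REPO = os.path.expanduser("~/firewall-configs")   # local git repo (hypothetical path)
        FETCHED = "/tmp/fw-current.cfg"                    # wherever the export lands

        def snapshot(operator):
            if not os.path.isdir(os.path.join(REPO, ".git")):
                subprocess.run(["git", "init", REPO], check=True)
            # Placeholder fetch step: replace with the real export/scp command.
            subprocess.run(["scp", "admin@firewall:/config/active.cfg", FETCHED], check=True)
            shutil.copy(FETCHED, os.path.join(REPO, "active.cfg"))
            subprocess.run(["git", "add", "active.cfg"], cwd=REPO, check=True)
            msg = "Config snapshot %s by %s" % (datetime.datetime.now().isoformat(), operator)
            # git commit only records a new revision when the file actually changed.
            subprocess.run(["git", "commit", "-m", msg], cwd=REPO)

        if __name__ == "__main__":
            snapshot(operator=os.environ.get("USER", "unknown"))

    git log and git diff then give previous revisions, what changed and when; recording who made each change still depends on feeding an operator name into the commit message or on the firewall's own audit log.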

    Read the article

  • How to prevent Mac OS X from creating .DS_Store files on non-Mac (non-HFS) volumes?

    - by sudo petruza
    Is there a way to prevent Mac OS X from creating .DS_Store and other hidden meta-files on foreign volumes like NTFS and FAT? I share an NTFS partition, holding data like the Thunderbird and Firefox profiles and Apache's DocumentRoot, between Mac OS X and Windows, which is very handy. I don't mind if Mac OS X is not capable of indexing or otherwise doing the neat things those meta-files are for. Note: it's not shared over a network; both operating systems and the shared partition coexist on the same disk, on the same machine.

    Read the article

  • Proper way to let user enter password for a bash script using only the GUI (with the terminal hidden)

    - by MountainX
    I have made a bash script that uses kdialog exclusively for interacting with the user. It is launched from a ".desktop" file so the user never sees the terminal. It looks 100% like a GUI app (even though it is just a bash script). It runs in KDE only (Kubuntu 12.04). My only problem is handling password input securely and conveniently. I can't find a satisfactory solution. The script was designed to be run as a normal user and to prompt for the password when a sudo command is first needed. In this way, most commands, those not requiring sudo rights, are run as the normal user. What happens (when the script is run from the terminal) is that the user is prompted for their password once and the default sudo timeout allows the script to finish, including any additional sudo commands, without prompting the user again. This is how I want it to work when run behind the GUI too. The main problem is that using kdesudo to launch my script, which is the standard GUI way, means that the entire script is executed by the root user. So file ownerships get assigned to the root user, I can't rely upon ~/ in paths, and many other things are less than ideal. Running the entire script as the root user is just a very unsatisfactory solution and I think it is a bad practice. I appreciate any ideas for letting a user enter the sudo password just once via GUI while not running the whole script as root. Thanks.
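
    One pattern worth sketching (offered as an idea, not a definitive fix): ask for the password once with kdialog --password and use it only to refresh sudo's timestamp with sudo -S -v, so the script itself keeps running as the normal user and later sudo calls succeed silently within the timeout. A minimal wrapper, written in Python purely for illustration; the real script's path is a placeholder.

        #!/usr/bin/env python
        # Sketch: obtain the sudo password once via a kdialog prompt, prime sudo's
        # timestamp with `sudo -S -v`, then run the real script as the normal user.
        import subprocess
        import sys

        def main():
            # kdialog --password prints the entered password on stdout.
            pw = subprocess.run(
                ["kdialog", "--password", "Administrator password required:"],
                capture_output=True, text=True)
            if pw.returncode != 0:
                sys.exit(1)  # user pressed Cancel
            # -S reads the password from stdin; -v only refreshes the sudo timestamp.
            ok = subprocess.run(["sudo", "-S", "-v"], input=pw.stdout, text=True)
            if ok.returncode != 0:
                subprocess.run(["kdialog", "--error", "Wrong password."])
                sys.exit(1)
            # sudo calls inside the script now succeed without prompting,
            # for as long as the sudo timeout lasts.
            subprocess.run(["/path/to/real-script.sh"])  # hypothetical path

        if __name__ == "__main__":
            main()

    The same few lines work in plain bash; the key point is that only the timestamp refresh ever sees the password, while the script itself, and all its file ownerships, stay with the normal user.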

    Read the article

  • Programmer friendly non-voxel art styles?

    - by Overv
    Like many other programmers I've always wanted to make a game, but simply lack the skills to do any production-quality graphics. I am, however, sure that I want to do the models and textures myself, because I need a lot of different objects and I am sure I wouldn't be able to find good matching models on 3D sites. That means I'll have to pick an art style that is "simple", programmer-friendly. An extreme example of this is of course Minecraft, but I don't want to go that basic. I'm absolutely against creating a voxel game. What kind of art styles are out there that are relatively simple, i.e. things made out of basic shapes and textures, but are still good enough to form a believable and detailed world? An example of what I mean is Wind Waker. The objects are formed of relatively simple shapes, but still provide enough detail to create a nice, living world. The environment my game is set in is a city. What I'm really asking for here are good examples of "simple" art styles applied in practice, so I can choose one that fits my skills.

    Read the article

  • XBMC DVB-T and Played Video filling screen

    - by Tubs
    I have a small PC running XBMC attached to a Samsung LE37M87BD. The PC isn't powerful enough to output a full HD 1080 image at 1920x1080, which is the TV's native resolution (about every half second things go extra fast, I assume it is skipping frames), so I want to reduce this. Annoyingly, the TV does not support any other widescreen resolutions (720 etc.). If I use a 4:3 image, the TV stretches it to 16:9; however, all 16:9 content is then stretched sideways, because XBMC is sending out a 4:3 image with a 16:9 image inside. Is there any way I can force XBMC to compensate for this, i.e. stretch vertically so that the black bars are removed, but not stretch horizontally?

    Read the article

  • How to host customer developed code server side

    - by user963263
    I'm developing a multi-tenant web application, most likely using ASP.NET MVC5 and Web API. I have used business applications in the past where it was possible to upload custom DLLs or paste custom code into a GUI to have custom functions run server side. Those applications were self-hosted and single-tenant, though, so the customer-developed bits didn't impact other clients. I want to host the multi-tenant web application myself and allow customers to upload custom code that will run server side. This could be for things like custom web services that client-side JavaScript can interact with, or for automation steps that they want triggered server side, asynchronously, when a user takes a particular action. Additionally, I want to expose an API that allows customers' code to interact with data specific to the web application itself. Client code may need to be "wrapped" so that it has access to appropriate references: to our custom API and maybe to a whitelist of approved libraries. There are several issues to consider: security, performance (infinite loops, otherwise poorly written code, load balancing, etc.), whether to receive compiled DLLs or require raw code, and so on. Is there an established pattern for this sort of thing, or a sample project anyone can point to? Or any general recommendations?
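
    There isn't one established pattern, but a building block common to most designs is running customer code in its own process with a hard time limit, which addresses the infinite-loop and load concerns separately from the .NET sandboxing question. A minimal sketch of that idea, in Python purely for illustration; the paths, limits and plugin entry point are made up.

        # Sketch: run an untrusted customer script in a separate process with a
        # wall-clock budget, so runaway code cannot stall the web application.
        import subprocess

        def run_customer_plugin(plugin_path, payload, timeout_seconds=5):
            try:
                result = subprocess.run(
                    ["python", plugin_path],        # isolated interpreter process
                    input=payload,                  # pass request data on stdin
                    capture_output=True,
                    text=True,
                    timeout=timeout_seconds,        # hard time limit
                )
            except subprocess.TimeoutExpired:
                return {"ok": False, "error": "plugin exceeded time limit"}
            if result.returncode != 0:
                return {"ok": False, "error": result.stderr[:1000]}
            return {"ok": True, "output": result.stdout}

        # Example use (hypothetical tenant plugin):
        # print(run_customer_plugin("/tenants/acme/hooks/on_save.py", '{"id": 42}'))

    In an ASP.NET host the analogous move is spawning a worker process (or container) per tenant invocation and enforcing quotas on it, rather than loading customer DLLs into the web application's own process.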

    Read the article

  • Adopting Technologies for the Sake of Technologies

    - by shiju
    Unlike other engineering industries, the software engineering industry really lacks maturity. This lack of maturity can be seen in different aspects of the entire software development life cycle. I think other engineering industries are well organised and structured, with common, proven engineering practices. The software engineering industry is a greatly diverse one, with different operating systems and a variety of development platforms, programming languages, frameworks and tools. These days, people chase hype and intellectual fashion without understanding their core business problems, adopting technologies and practices for the sake of technologies and practices and simply becoming a “poster child” of those technologies and practices. Understanding the core business problem and providing the best, solid solution with a platform-neutral approach will give you more business value and ROI than blindly adopting technologies and tailoring your applications for the sake of technologies and practices. People have been migrating their solutions to new technologies and different versions of frameworks without any business need.

    The “Pepsi Challenge” in Software Development

    The Pepsi Challenge marketing campaign of the 1980s was a popular and very interesting promotion in which people tasted one cup of Pepsi and another cup of Coca-Cola. In the taste test, more than 50% of people preferred Pepsi over Coca-Cola. The success behind Pepsi was the extra sweetness it contains: they simply added more sugar, and more people preferred the sweeter flavour. But you can't identify the better cola after sipping one cup, based only on how sweet it is. The same thing has been happening in the software industry when choosing development frameworks and technologies. People choose frameworks based on the initial sugary feeling, without understanding their core strengths and weaknesses. The sugary framework may turn out to be harmful when you develop real-world systems. There is no silver bullet for solving every kind of problem; frameworks and tools have strengths and weaknesses, so it is better to understand both. And please keep in mind that you have to develop real apps to understand the real capabilities and weaknesses of a framework. Evaluating a technology based on a few blog posts will harm your projects, and those bloggers might lack real-world experience with the framework.

    The Problem with Aligning a Development Practice with Tools

    Recently I observed a discussion in a group where someone asked for suggestions on practicing Continuous Delivery (CD) as part of agile-based application engineering. The discussion quickly turned to choosing a Continuous Integration (CI) tool, with different people suggesting different CI tools simply for practicing Continuous Delivery. If you have worked with core agile engineering practices, you know that the real essence of agile is neither choosing a tool nor choosing a process. Simply choosing a CI tool from a particular vendor will not ensure that you are delivering evolving software based on customer feedback. You have to understand the real essence of an engineering practice and choose the right tool for practicing it, instead of focusing on a particular tool. If you want to adopt a practice, you need a solid understanding of it and of its real essence; tools just help us with better automation.

    Adopting New Technologies for the Sake of Technologies

    Another problem is that developers have a tendency to adopt new technologies and simply migrate their existing apps to them. It is fine to move to a new technology if your existing system has problems with its technology stack or maintainability challenges with the existing solution. We also adopt new technologies to solve new challenges, such as scalability challenges when the application or user base is growing unpredictably. But please keep in mind that every new technology becomes old after you have worked with it for a few years. The Facebook status update below, from Janakiraman, expresses the attitude of a typical customer. For example, Node.js is becoming the hottest buzzword in the software industry and many developers are trying to adopt it for their apps. The important thing is that Node.js is a minimalist framework that does some great things for some problems, but it's not a silver bullet. I have also been working with Node.js; it is good for some problems, but a really bad choice for every kind of problem. Adopting new technologies for new projects is good if we get real business value from them, because a newer framework can solve some existing, well-known problems and incorporate good solutions for the latest challenges. But adopting a new technology for the sake of new technology is a really bad idea.

    Another example: JavaScript is getting a lot of attention, so a lot of developers are building heavily JavaScript-centric web apps. First, they adopt a client-side JavaScript MV* framework such as AngularJS, Ember or Backbone and develop a Single Page App (SPA), where they repeat the mistakes we made in the past on the server side. The mistakes we made on the server side are being carried over to the client side. The problem is that people are just adopting new technologies, but not improving their solutions. I predict that many Single Page Apps will suck in the future. We need a hybrid approach in which we can leverage both server side and client side for developing next-generation web apps. Another problem is liking a particular framework and then using it for every kind of app. In the past, I knew some passionate Silverlight guys who tried to use that framework for every kind of app, including larger line-of-business apps. And these days developers are migrating their existing Silverlight apps in favour of the HTML5 buzzword. So the real question is: what business value are we getting from these apps when we develop them for the sake of a particular technology instead of a business need?

    Another problem is that our solutions consultants try to provide unnecessary solutions for the sake of a particular technology or a hype. For example, Big Data solutions are great for solving the problem of the three Vs: volume, velocity and variety. But trying to apply this to every application will create problems. Say there is a small web site running on a limited budget; saying that we need a recommendation engine for it, built on a Hadoop-based solution with a 16-node cluster, would be really horrible. If we really need a Hadoop-based solution, go for it, but forcing it on every application would be a big disaster. It would be great if we could understand the core business problems first and then choose the right framework for solving the actual business problem, instead of trying to provide so many solutions.

    The Problem with Being Tied to a Platform Vendor

    Some organizations and teams are tied to a particular platform vendor and don't want to use any product other than those from their preferred or existing vendor. They will accept any product provided by the vendor regardless of its capability. This gives you some benefits with regard to integration and collaboration between the different products provided by the same vendor, but it costs you the opportunity to provide better solutions for your business problems. As a real-world scenario, a lot of companies use SAP for their ERP solutions. When they think about mobility, or about developing hybrid mobile apps, they can easily find a framework from SAP: SAP provides a framework for HTML5-based UI development named SAPUI5. If you adopt that framework based only on your preference for the existing platform vendor, you might lose opportunities to provide a better solution. Initially you might enjoy the sugary feeling provided by the platform vendor, but you have to think about developing apps capable of solving future challenges. I am not saying that any particular framework is bad, and I believe every framework is better than another at solving at least one problem. My point is that we should not be tied to any specific platform vendor unless the organization has resource availability problems.

    Being Polyglot to Provide the Right Solutions

    The modern software engineering industry is greatly diverse, with different tools and platforms. Lots of open source frameworks and new programming languages are being released to the developer community, and choosing the right platform without a biased opinion is a really difficult task. But it would be really great if we could develop a platform-neutral mindset and become polyglot developers in order to provide better solutions based on the actual business problems. IMHO, we should learn a new programming language and a new framework every year. This will improve the quality of our developer capabilities and also improve our primary programming language skills. Being polyglot, as individual developers and as organizational teams, will give greater opportunities to your developer experience and to your applications. Organizations can analyse their business problems without being tied to any technology, and later provide solutions by choosing the right platforms and tools.

    Summary

    In this blog post, what I was trying to say is that we should not be tied to, or biased towards, any development platform, technology, vendor or programming language, and we should not adopt technologies and practices for the sake of technologies. If we adopt a technology or a practice for the sake of it, we simply become a “poster child” of that technology or practice. We should not become poster children for other people's intellectual thoughts and theories; instead we should become solutions developers and solutions consultants who can provide better solutions for business problems. Being a polyglot developer is a good way to improve your developer skills, which lets you provide better solutions for business problems. The most important thing is that we should become platform-neutral developers whose passion is providing brilliant solutions. It would be great if we could provide minimalist, pragmatic business solutions. You can follow me on Twitter @shijucv

    Read the article

  • Best practice for assigning private IP ranges?

    - by Tauren
    Is it common practice to use certain private IP address ranges for certain purposes? I'm starting to look into setting up virtualization systems and storage servers. Each system has two NICs, one for public network access, and one for internal management and storage access. Is it common for businesses to use certain ranges for certain purposes? If so, what are these ranges and purposes? Or does everyone do it differently? I just don't want to do it completely differently from what is standard practice in order to simplify things for new hires, etc.
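
    Practices vary, but the ranges people carve up for this are almost always the RFC 1918 blocks. As a small illustration of dedicating distinct ranges to distinct purposes, the snippet below (example ranges only, not a standard) checks that a plan stays inside private space and doesn't overlap.

        # Sketch: sanity-check an example carve-up of RFC 1918 space.
        # The specific ranges chosen here are illustrative, not a recommendation.
        import ipaddress

        rfc1918 = [ipaddress.ip_network(n) for n in
                   ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

        plan = {
            "public-facing NICs": ipaddress.ip_network("10.1.0.0/16"),
            "management":         ipaddress.ip_network("10.250.0.0/16"),
            "storage/iSCSI":      ipaddress.ip_network("10.251.0.0/16"),
        }

        for purpose, net in plan.items():
            assert any(net.subnet_of(block) for block in rfc1918), purpose
        nets = list(plan.values())
        for i, a in enumerate(nets):
            for b in nets[i + 1:]:
                assert not a.overlaps(b), (a, b)
        print("example plan is private and non-overlapping")

    The specific assignments matter less than writing them down and keeping them from overlapping as the network grows.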

    Read the article

  • How should I implement a command processing application?

    - by Nini Michaels
    I want to make a simple, proof-of-concept application (a REPL) that takes a number and then processes commands on that number. Example: I start with 1. Then I write "add 2", it gives me 3. Then I write "multiply 7", it gives me 21. Then I want to know if it is prime, so I write "is prime" (on the current number - 21), it gives me false. "is odd" would give me true. And so on. Now, for a simple application with few commands, even a simple switch would do for processing the commands. But if I want extensibility, how would I need to implement the functionality? Do I use the command pattern? Do I build a simple parser/interpreter for the language? What if I want more complex commands, like "multiply 5 until >200"? What would be an easy way to extend it (add new commands) without recompiling? Edit: to clarify a few things, my end goal is not to make something similar to WolframAlpha, but rather a list (of numbers) processor. But I want to start slowly (on single numbers). I have in mind something similar to the way one would use Haskell to process lists, but a very simple version. I'm wondering if something like the command pattern (or an equivalent) would suffice, or if I have to make a new mini-language and a parser for it to achieve my goals?
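
    For comparison, here is a minimal sketch (in Python, using the commands from the example above) of the dispatch-table flavour of the command pattern; extending it means registering another entry rather than recompiling a switch.

        # Sketch: a tiny REPL core where each command is a registered function.
        def is_prime(n):
            n = int(n)
            if n < 2:
                return False
            return all(n % d for d in range(2, int(n ** 0.5) + 1))

        COMMANDS = {
            "add":      lambda cur, arg: cur + float(arg),
            "multiply": lambda cur, arg: cur * float(arg),
            "is prime": lambda cur, arg: is_prime(cur),
            "is odd":   lambda cur, arg: int(cur) % 2 == 1,
        }

        def evaluate(line, current):
            # Longest-prefix match so "is prime" wins over a hypothetical "is".
            for name in sorted(COMMANDS, key=len, reverse=True):
                if line.startswith(name):
                    arg = line[len(name):].strip() or None
                    return COMMANDS[name](current, arg)
            raise ValueError("unknown command: " + line)

        current = 1.0
        for line in ("add 2", "multiply 7", "is prime", "is odd"):
            result = evaluate(line, current)
            print(line, "->", result)
            if not isinstance(result, bool):   # predicates report, they don't replace the number
                current = result

    More elaborate commands like "multiply 5 until >200" would push this toward a real parser, but the registry idea stays the same: the parser produces a command name plus arguments, and the table maps the name to behaviour.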

    Read the article

  • How to convert PowerPoint presentations into a Kindle/E-reader friendly form?

    - by Shiki
    I have a lot of documents in .ppt and .pptx (blame the co-workers). I would like to read them on the way home or elsewhere, when I have a little time to catch up with things. One thing I could do with the documents is cut them together into one file. But saving that as even the smaller version of PDF (according to Office 2010) results in a huge file, and PDF is hardly readable on a Kindle. I would need something like .epub: free and easy on the device. Is there such a thing? (Manually I could copy everything down into native text and whatnot, create new presentations, save those, and convert them. But that would just take a lot of time.)
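
    If readable text is enough (losing slide images and layout), one semi-automated route is to pull the text out of the .pptx files and hand the result to an ebook converter such as calibre. A rough sketch using the python-pptx library; the file names are placeholders, and legacy .ppt files would need converting to .pptx first.

        # Sketch: extract slide text from .pptx files into one plain-text file,
        # which can then be converted to an e-reader format.
        # Only handles .pptx, and drops images and layout entirely.
        from pptx import Presentation   # pip install python-pptx
        import glob

        def pptx_to_text(path):
            lines = []
            for i, slide in enumerate(Presentation(path).slides, start=1):
                lines.append("--- %s, slide %d ---" % (path, i))
                for shape in slide.shapes:
                    if shape.has_text_frame and shape.text_frame.text.strip():
                        lines.append(shape.text_frame.text)
            return "\n".join(lines)

        with open("slides.txt", "w", encoding="utf-8") as out:
            for path in sorted(glob.glob("*.pptx")):
                out.write(pptx_to_text(path) + "\n\n")

    The resulting slides.txt (or an HTML version of it) converts to MOBI/EPUB far more gracefully than a slide-per-page PDF.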

    Read the article

  • Can I put /tmp and /var/log in a ramdisk on OS X?

    - by kbyrd
    For non-critical Linux systems, I often move things like /tmp and /var/log to tmpfs to save on some disk writes. I've been doing this for a year or so, and if I ever need the logs across reboots, I just comment out a line in /etc/fstab and then start debugging. In any case, I would like to do the same thing on OS X. I've seen posts on creating a ramdisk for OS X, but I'm looking for a more permanent solution: I always want /tmp and /var/log mounted in a ramdisk, with the ability to turn that off with a bit of command-line editing in vi if I have to.

    Read the article

  • New LAMP server, all links redirecting to localhost

    - by serilain
    I've got a very frustrating issue with what should be a bespoke install of Ubuntu 12.04, the LAMP config provided in apt-get install lamp-server^, and a web application called The Fascinator. After installing those three things and making no changes to any of them, I can access the application through a public IP (http://lib-hf1.lib.sfu.ca:9997 for the curious), but the domain of every link within that page is changed to localhost, including links to images and CSS, so nothing loads correctly and all of the links are broken. I've Googled around and found some people who appear to be having this issue with WP and Drupal, but nothing makes reference to a system-wide setting, and no one using the Fascinator seems to be having this issue. I have a faint memory that this might have something to do with mod_rewrite, but I'm pretty well stumped.

    Read the article

  • What You Said: How You Track Your Time

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite time tracking tips, tricks, and tools. Now we’re back to highlight the techniques HTG readers use to keep tabs on their time. While more than one of you expressed confusion over the idea of tracking how you spend all your time, many of you were more than happy to share the reasons for and the methods you use to stay on top of your time expenditures. Scott uses a fluid and flexible project management tool: I use kanbanflow.com, with two boards to manage task prioritisation and backlog. One board called ‘Current Work’ has three columns ‘Do Today’, ‘In Progress’ and ‘Done’. The other is called ‘Backlog’, which splits tasks into priority groups – ‘Distractions (NU+NI)’, ‘Goals (NU+I)’, ‘Interruptions (U+NI)’ and ‘Critical (U+I)’, where U is Urgent and I is Important (and N is Not). At the end of each day, I move things from my Backlog to my ‘Current Work’ board, with the idea being to complete Goals before they become Critical. That way I can focus on the ‘Current Work’ Do Today column so I don’t feel overwhelmed and can plan my day. As priorities change or interruptions pop up, it’s just a matter of moving tasks between boards. I have both tabs open in my browser all day – this is probably good for knowledge workers strapped to their desk, not so good for those in meetings all day. In that case, go with the calendar on your phone. While the above description might make it sound really technical, we took the cloud-based app for a spin and found the interface to be very flexible and easy to use.

    Read the article

  • Fixing the position of items in vim's statusline

    - by ldigas
    My statusline looks something like this:

        set statusline+=%m
        set statusline+=b%n: "
        set statusline+=%f
        set statusline+=%F
        set statusline+=%R
        set statusline+=%Y
        set statusline+=\ 
        set statusline+=[
        set statusline+=row\ %l/%L
        set statusline+=,\ "
        set statusline+=column\ %c\ (%v)
        set statusline+=column\ %v\ (%c)
        set statusline+=]

    which, on an average day, when there are no clouds, gives something like this: [-]b3:options.txt,RO,HELP [row 6291/7778, column 42 (29)]. Now, when I go about splitting windows and opening different files, some of them modified, some of them not, the things in the statusline start to wiggle back and forth, and it annoys me to no end. I saw in vim's help (:help 'statusline) that one can set a fixed width for some items. How would you go about fixing the above items so that, whether an item is missing or whatever its width, it doesn't affect the other ones? (i.e. so I can always look at a known position and know what is there, not move my eyes left and right searching for the thing I need)

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12GB of memory, most of which (at least 10GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. Firstly, a single-threaded, single-connection process will do many inserts into 2-4 tables linked by foreign key. Secondly, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers etc. Relevant doco welcome. Thanks!

    Read the article

  • SAN cache memory upgrade

    - by Scott Lundberg
    We currently have an IBM DS4300 dual-controller Fibre SAN. It is a good box, but getting pretty old. It came with 256MB of cache per controller. Recently we replaced the batteries in one of the controllers and noticed that the cache is a DDR PC2100 ECC DIMM. Of course, we are now thinking about how cheap this RAM is and whether there is any good reason we can't upgrade it. IBM used to have a "Turbo" upgrade for this box that doubled the cache and added a bunch of software features for about 10K USD. Since that product has been end-of-lifed, I don't think we can get that upgrade, and we don't need the software upgrades (FlashCopy, StorageCopy, etc.). Besides the obvious potential warranty issue, what, if any, issues would we expect to see if we attempted to put two 1GB DIMMs in this unit? Any other things I am missing here? EDIT: Memory label: Samsung CN 0433 PC2100U-25331-A1 M381L3223ETM-CB0 256MB DDR PC2100 CL2.5 ECC

    Read the article

  • Turn off Windows Defender on your builds

    - by george_v_reilly
    I've spent some time this evening profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files. Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot and file.close was really expensive. A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stats down from 1.5 seconds to 0.6. Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation. It was simply that closing those files took far longer than it should have. I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit. I recorded a couple of traces of my application running, then I looked at them in the Windows Performance Analyzer, whereupon I saw that in each case, the CPU spike of my app was followed by a CPU spike in MsMpEng.exe. What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction. The moral of this story is: don't let your virus checker run on your builds.
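
    The cache itself isn't shown in the post, but a sketch of the memoizing wrapper (mine, not the author's code) makes the idea concrete; the trade-off is staleness if files change while the tool is running, which is usually acceptable inside a single build step.

        # Sketch: memoize os.stat results for the duration of one build run.
        import os
        import stat as statmod

        _stat_cache = {}

        def cached_stat(path):
            """os.stat with per-run memoization keyed on the absolute path."""
            key = os.path.abspath(path)
            if key not in _stat_cache:
                _stat_cache[key] = os.stat(key)
            return _stat_cache[key]

        def cached_isdir(path):
            try:
                return statmod.S_ISDIR(cached_stat(path).st_mode)
            except OSError:
                return False

        if __name__ == "__main__":
            # Stat-ing the same tree twice: the second pass is answered from the cache.
            for _ in range(2):
                for root, dirs, files in os.walk("."):
                    for name in files + dirs:
                        cached_isdir(os.path.join(root, name))
            print("unique paths stat-ed:", len(_stat_cache))

    In the real tool, the explicit checks and helpers like os.path.isdir would be routed through cached_stat so that the repeated lookups, explicit and implicit, all hit the cache.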

    Read the article

  • How to save Word documents as HTML to be viewed in Firefox

    - by private_meta
    I need to save a Word document as HTML. It has some background images, other images, text, and so on. It opens correctly in Internet Explorer, but how can I save a Word doc as HTML so that Firefox and other current browsers render it correctly? All the images are missing in the document. I looked through the generated HTML document, but the paths to the images appear to be correct. Any idea? Things like "Don't save docs as HTML" won't be helpful here. Edit: To make myself clear, the normal "Save as HTML" doesn't cut it; the result is broken in any browser other than Internet Explorer. Edit 2: What I'm using is Word 2010 and Firefox 4. I also tried rendering it in the latest Chrome version, which failed as well. I used different compatibility settings for saving as HTML; it did not help.

    Read the article

  • Contribute to GlassFish in Five Different Ways

    - by arungupta
    GlassFish has a lot to offer from Java EE 6 compliance, HA & Clustering, RESTful administration, IDE integration and many other features. However a recent blog by Markus, a GlassFish Champion, said something different: Ask not what GlassFish can do for you, but ask what you can do for GlassFish! Markus explained how you can easily contribute to GlassFish without being a programming genius. The preparatory steps are simple: • First of all: Don't be afraid! • Prepare yourself - Get up to speed! And then specific suggestions with cross-referenced documents: • Review, Suggest and Add Documentation! • Help Others - be a community hero! • Find and File Bugs on Releases! • Test-drive Promoted Builds and Release Candidates! • Work with Code! Get things done! Are you ready to contribute to GlassFish ? Read more details in Markus's blog.

    Read the article

  • Programming by dictation?

    - by Andrew M
    i.e. you speak out the code, and someone else across the room types it in. Anyone tried this? Obviously the person taking the dictation would need to be a coder too, so you didn't have to explain everything and go into tedious detail (not 'open bracket, new line...' but more like 'create a new class called myParser that takes three arguments, the first one is...'). I thought of it because sometimes I'm too easily distracted at my computer. Surrounded by buttons, instant gratification a click away, the world at my fingertips. To get stuff done, I want to get away and write my code on paper. But that would mean losing access to necessary resources, and necessitate tedious typing-up later on. The solution? Dictate. Pros: no chance to check reddit, stackexchange, gmail, etc.; code while you pace the room, lie down, play billiards, whatever; train your brain to think more abstractly (you have to visualize things if you can't just see the screen); skip the tedious details (closing brackets etc.); the typist gets to shadow a more experienced programmer and learn how they work; the typist can provide assistance/suggestions; the external pressure of a typist expecting instructions urges you to stay focussed. Cons: might be too hard; might not work any better; rather inefficient use of the assisting programmer; need to find/pay someone to do this.

    Read the article

  • Should one always know what an API is doing just by looking at the code?

    - by markmnl
    Recently I have been developing my own API, and with that invested interest in API design I have been keenly interested in how I can improve it. One aspect that has come up a couple of times (not from users of my API, but in my observing discussion about the topic) is: one should know, just by looking at the code calling the API, what it is doing. For example, see this discussion on GitHub for the discourse repo; it goes something like: foo.update_pinned(true, true); Just by looking at the code (without knowing the parameter names, documentation etc.) one cannot guess what it is going to do - what does the 2nd argument mean? The suggested improvement is to have something like: foo.pin() foo.unpin() foo.pin_globally() And that clears things up (the 2nd arg was whether to pin foo globally, I am guessing), and I agree that in this case the latter would certainly be an improvement. However, I believe there can be instances where methods that set different but logically related state would be better exposed as one method call rather than separate ones, even though you would not know what it is doing just by looking at the code. (So you would have to resort to looking at the parameter names and documentation to find out - which personally I would always do no matter what if I am unfamiliar with an API.) For example, I expose one method SetVisibility(bool, string, bool) on a FalconPeer, and I acknowledge that just looking at the line: falconPeer.SetVisibility(true, "aerw3", true); you would have no idea what it is doing. It is setting 3 different values that control the "visibility" of the falconPeer in the logical sense: accept join requests, only with password, and reply to discovery requests. Splitting this out into 3 method calls could lead a user of the API to set one aspect of "visibility" while forgetting to set others, which I force them to think about by only exposing the one method that sets all aspects of "visibility". Furthermore, when the user wants to change one aspect they almost always want to change another aspect, and can now do so in one call.
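
    The tension can also be reduced without splitting the method: keep one call that sets every aspect, but make the call site self-describing. A sketch in Python rather than the original API, with argument names invented for illustration:

        # Sketch: same "set everything at once" method, but the call site reads
        # like documentation because the arguments must be passed by name.
        class Peer:
            def set_visibility(self, *, accept_join_requests, join_password,
                               reply_to_discovery_requests):
                self.accept_join_requests = accept_join_requests
                self.join_password = join_password
                self.reply_to_discovery_requests = reply_to_discovery_requests

        peer = Peer()
        # Compare with peer.set_visibility(True, "aerw3", True): every aspect is
        # still set in one call, but each value now says what it controls.
        peer.set_visibility(accept_join_requests=True,
                            join_password="aerw3",
                            reply_to_discovery_requests=True)

    In C# the analogous move is named arguments or a small options object; either way the reader no longer has to decode bare booleans, and the single-call-sets-everything property is preserved.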

    Read the article
