Search Results

Search found 22569 results on 903 pages for 'win32 process'.

Page 564/903 | < Previous Page | 560 561 562 563 564 565 566 567 568 569 570 571  | Next Page >

  • When NOT to use a framework

    - by Chris
    Today, one can find a framework for just about any language, to suit just about any project. Most modern frameworks are fairly robust (generally speaking), with hour upon hour of testing, peer-reviewed code, and great extensibility. However, I think there is a downside to ANY framework in that programmers, as a community, may become so reliant upon their chosen frameworks that they no longer understand the underlying workings, or in the case of newer programmers, never learn the underlying workings to begin with. It is easy to become specialized to the degree that you are no longer a "PHP programmer" (for example), but a "Drupal programmer", to the exclusion of anything else. Who cares, right? We have the framework! We don't need to know how to "do it by hand"! Right? The result of this loss of basic skills (sometimes to the extent that programmers who don't use frameworks are viewed as "outdated") is that it becomes common practice to use a framework where it is not required or appropriate. The features the framework facilitates wind up confused with what the base language is capable of. Developers start using frameworks to accomplish even the most basic of tasks, so that what was once considered a rudimentary process now involves large libraries with their own quirks, bugs, and dependencies. What was once accomplished in 20 lines is now accomplished by including a 20,000-line framework AND writing 20 lines to use the framework. Conversely, one does not want to reinvent the wheel. If I'm writing code to accomplish some basic, common little task, I might feel like I am wasting my time when I know that framework XYZ offers all the features I am after, and a whole lot more. The "whole lot more" part still has me worried, but it doesn't seem that many even consider it anymore. There has to be a good metric to determine when it is appropriate to use a framework. What do you consider the threshold to be? How do you decide when to use a framework, and when not to?

    Read the article

  • CPU usage nearly 100% constantly on Windows XP SP3

    - by user23954
    When running some heavy applications like games or VirtualBox for some time, the CPU usage is normal for about 15 minutes and then suddenly increases. Even after I quit the heavy apps, when I start some other application, its CPU usage is also very high. This continues until I reboot the system. There is no single particular process occupying most of the CPU; all processes' CPU usage is a little higher than normal. Any solutions?

    Read the article

  • Installing FreeNAS 8.3 problems

    - by osij2is
    I'm trying to install FreeNAS 8.3 on some desktop-level hardware (AMD Phenom + 890FX + 16GB) and I've been unsuccessful. I initially tried using a USB stick and followed the instructions on the FreeNAS site here. Making the USB stick was simple, as the instructions laid out, but as soon as the USB stick is detected (during the boot process) some text appears and quickly vanishes, and my machine reboots endlessly. After trying several different ways to make the USB stick, I tried using a DVD-ROM, but again I had the same issue as with the USB stick. This leads me to conclude that a BIOS setting is incorrect, but I have no idea which one. I've disabled "fast" boot in the BIOS, and I've correctly configured the boot order for the USB stick and the DVD-ROM drive, so I know that part is working. Have I missed anything that might be causing this problem? I'm not a FreeBSD/FreeNAS expert by any means.

    Read the article

  • uWSGI and python virtual env

    - by user27512
    I'm trying to use uWSGI with a virtual env in order to use the Trac bug tracker on it. I've installed system-wide uwsgi via pip. Next, I've installed trac in a virtualenv:

        $ virtualenv venv
        $ . venv/bin/activate
        $ pip install trac

    I've then written a simple uWSGI configuration script:

        [uwsgi]
        master = true
        processes = 1
        socket = localhost:3032
        home = /srv/http/trac/venv/
        no-site = true
        gid = www-data
        uid = www-data
        env = TRAC_ENV=/srv/http/trac/projects/my_project
        module = trac.web.main:dispatch_request

    But when I try to launch it, it fails:

        $ uwsgi --http :8000 --ini /etc/uwsgi/vassals-available/my_project.ini --gid www-data --uid www-data
        ...
        Set PythonHome to /srv/http/trac/venv/
        ...
        *** Operational MODE: single process ***
        ImportError: No module named trac.web.main
        unable to load app 0 (mountpoint='') (callable not found or import error)

    I think uWSGI isn't using the virtual env. When inside the virtual env, I can import trac.web.main without having an ImportError. How can I do that? Thanks
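
    A couple of things that might help pin this down (a sketch under assumptions, not a confirmed fix: the paths are the ones from the question, and the python2.7 directory name depends on the interpreter the venv was built with). First check that Trac really is importable with the venv's own interpreter, then try passing the virtualenv explicitly on the uwsgi command line; --virtualenv is the command-line equivalent of the home ini option, and --pythonpath adds the venv's site-packages as a belt-and-braces step:

        VENV=/srv/http/trac/venv

        # 1. Confirm trac is importable with the venv's own interpreter.
        "$VENV/bin/python" -c "import trac.web.main; print(trac.web.main.__file__)"

        # 2. Launch uWSGI with the virtualenv given explicitly.
        uwsgi --http :8000 \
              --ini /etc/uwsgi/vassals-available/my_project.ini \
              --virtualenv "$VENV" \
              --pythonpath "$VENV/lib/python2.7/site-packages" \
              --uid www-data --gid www-data

    If the import still fails, another common cause is that the system-wide uwsgi was built against a different Python than the venv; installing uwsgi inside the venv itself (pip install uwsgi with the venv activated) and running that binary avoids the mismatch.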

    Read the article

  • Issue with broken disk on Solaris with raidctl - how to proceed

    - by weismat
    I have a SunFire T2000 server which has 2 mirrored disk pairs. The server required an exchange of the system battery. After swapping the battery, at first no disks were found. After booting from CD we managed to find the disks, but now one disk is broken and raidctl reports a failed synchronisation. The boot process now stops when trying to mount the file systems. The power light of the broken drive is not even blinking. What is the best way to proceed now? Fortunately I could live with losing the data on that drive, as it is backed up, but I would like to keep the rest of the data, as it contains /etc, and get the server booting again.
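
    As a first, read-only step it may help to see what the controller itself reports before touching anything (a sketch only: c0t0d0 is a placeholder device name, and the exact raidctl flags vary a little between Solaris releases):

        raidctl -l                 # list RAID volumes known to the controller
        raidctl -l c0t0d0          # status of one volume (e.g. DEGRADED / FAILED / SYNC)
        format < /dev/null         # list the physical disks the OS can still see

    If the surviving half of the mirror is healthy, booting from it and then replacing and re-synchronising the failed disk is the usual path, but confirm the volume status first.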

    Read the article

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra; however, now that Mercurial comes bundled with the rebase extension and it is a popular practice in git, I'm wondering if it could really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware that rebasing is dangerous after pushing. On the other hand, I see the point of trying to package 5 commits into a single one to make it look niftier (especially in a production branch); however, personally I think it would be better to be able to see the partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would IMHO have good value to those studying the codebase and following the developers' train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase to hide mistakes... and I don't think this is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or conversely, do you prefer to run into nifty, well-packed commits and disregard the programmers' experimentation process? Whichever one you chose, why does that work for you? (Having other team members keep history, or alternatively, rebasing it.)

    Read the article

  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up autoscaling for our web server, and I'm unsure how to go about the process of scaling servers and updating the existing code without building a new AMI and changing the autoscale config to use it. I've read a bit about people bundling the new code, uploading it to S3, and having new servers grab the bundle on boot, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo, and when we update the code, we push it to GitHub, SSH into the web app server, and run a hook to bring down the latest code. So I was thinking that another option could be to just run that hook as an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example new blog posts' images and such, which aren't included in the git repo), but it's something. Could anyone provide some advice on what a common solution is, or on why my proposed solution is a bad idea? Thanks all
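
    One common pattern is to keep the AMI generic and have every instance pull the current code at boot (via user data or an rc script), with the same script run from cron or a deploy hook afterwards; assets that are not in git can be synced from S3. The sketch below is only an illustration: the repo URL, paths, bucket name, and the use of the AWS CLI are all assumptions.

        #!/bin/bash
        # Hypothetical boot/cron deploy script: pull the latest code and sync
        # non-repo assets (e.g. uploaded images) from S3.
        set -e
        APP_DIR=/var/www/app

        if [ ! -d "$APP_DIR/.git" ]; then
            git clone git@github.com:example/app.git "$APP_DIR"
        else
            cd "$APP_DIR" && git pull --ff-only origin master
        fi

        # Sync uploads and other assets that live outside the repo.
        aws s3 sync s3://example-app-assets/uploads "$APP_DIR/wp-content/uploads"

        # Reload the web server so the new code is picked up.
        service apache2 reload || service httpd reload

    New instances launched by the autoscaler then come up on the current code without a new AMI, and existing instances converge on the next cron run.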

    Read the article

  • How to diagnose RAM?

    - by x-man
    I have a Java process that aborts after a while with SIGSEGV. It started to happen after I upgraded the server with more RAM. Having tested on different JVMs, I suspect it might be a hardware problem, but no problem was detected by memtest86. So, what else can I do to detect the source of the problem? Should I take the RAM modules out one by one to find the faulty module? The server is running 64-bit openSUSE 11.3. The memory is not ECC, it seems. I have a kit of this (3 × 4 GB × 2 = 24 GB): http://www.kingston.com/datasheets/KHX1600C9S3K2_8GX.pdf
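
    A clean memtest86 pass does not completely rule the RAM out; a complementary check is to run a userspace tester on the live system and to look for machine-check errors in the logs around the crash times. This is only a sketch: the package may need to be built from source if openSUSE 11.3 does not ship it, and the test size should leave headroom for the OS.

        sudo zypper install memtester      # or build memtester from source
        sudo memtester 4G 3                # lock and test 4 GB of RAM, 3 passes

        # Look for machine-check / memory-related errors around the crashes.
        dmesg | grep -iE 'mce|memory|ecc'
        sudo grep -iE 'mce|segfault' /var/log/messages | tail -n 50

    If software tests stay clean, pulling the modules and testing one pair (or one bank) at a time is still the most reliable way to isolate a faulty stick or slot.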

    Read the article

  • How to disable discrete GPU using NVIDIA drivers?

    - by penzoiders
    I have a Dell Studio XPS 13 (aka 1340). As of 12.04 most things run smoothly out of the box, but I have some power-drain and heat issues (if not to be called terrible heat issues). The system came with an NVIDIA GeForce 9500M (which has Hybrid SLI), and it shows up in lspci as these 2 cards:

        02:00.0 VGA compatible controller: NVIDIA Corporation G98 [GeForce 9200M GS] (rev a1)
        03:00.0 VGA compatible controller: NVIDIA Corporation C79 [GeForce 9400M G] (rev b1)

    I had to install nvidia-current over the nouveau driver because nouveau freezes the system after suspension. By installing nvidia-current and running nvidia-xconfig, resume after suspension is fixed. By the way, with both nvidia-current and nouveau the system drains a lot of battery and heats up a lot. I suppose this is because the discrete GPU is always on. I don't really need 3D graphics on this system, beyond the minimum needed to run Unity and Compiz for window management. So my question is: how do I disable, using nvidia-current, the discrete GPU (9200M) and use only the integrated one (9400M)? Notes: In the BIOS I have no option to disable the discrete GPU. This, I think, is not applicable because of the suspend-freeze issue (with nouveau): https://help.ubuntu.com/community/HybridGraphics I've found this, but I don't know which --sli option I should choose to fit my needs: http://manpages.ubuntu.com/manpages/hardy/man1/nvidia-xconfig.1.html My system doesn't have Optimus or CUDA, but can anyone tell me whether Bumblebee would work for me?
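
    One approach sometimes used on pre-Optimus Hybrid SLI machines (Bumblebee targets Optimus, so it likely does not apply here) is to pin X to the integrated 9400M G by giving its PCI BusID in the X configuration, so the proprietary driver only drives that GPU. The sketch below is an assumption-laden example, not a confirmed fix: the BusID is taken from the 03:00.0 entry in the lspci output above, the file path is one possible location, and selecting the integrated GPU for X does not necessarily power the 9200M GS down (power gating on this generation generally needs vga_switcheroo, which requires nouveau).

        # Sketch only: tell X to use the integrated 9400M G (03:00.0 above).
        # If your X server does not read /etc/X11/xorg.conf.d/, merge the
        # Device section into /etc/X11/xorg.conf instead.
        sudo mkdir -p /etc/X11/xorg.conf.d
        printf '%s\n' \
            'Section "Device"' \
            '    Identifier "Integrated9400M"' \
            '    Driver     "nvidia"' \
            '    BusID      "PCI:3:0:0"' \
            'EndSection' | sudo tee /etc/X11/xorg.conf.d/20-nvidia-integrated.conf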

    Read the article

  • IIS 7 SSO stops working during high CPU load? [migrated]

    - by DanB
    On our IIS7 site (Windows 2008 Server), we have set up single sign-on (SSO). It seems to work fine most of the time, but when the CPU load becomes high, SSO authentication completely stops working. I did some research and tried this suggestion to increase the max number of worker processes in the default app pool, but the increase did not help. Some details: The site is a WordPress blog. The server has plenty of RAM (2 GB) and free disk space. SSO is achieved by putting a copy of the WordPress login page (wp-login.php) into a subfolder below the root that has anonymous authentication disabled, and then redirecting the browser to it. This was the recommendation of Microsoft given to our consultants. To increase CPU load for testing, I have three scripts hit the home page simultaneously, over and over. This drives CPU to 100%. When these scripts are running, SSO authentication simply doesn't happen. As soon as I stop the scripts, SSO works again. (I should mention that the SSO problem also happens when many users visit the site at once....) The WordPress database process (mysqld) is not stressed at all by the scripts. I would be happy to provide further diagnostics. Any help appreciated!

    Read the article

  • Trying to install PHP 5.2 on IIS/Win 2008 - Error 500

    - by Razor
    I have a fresh install of IIS 7 - I just added the Web Platform Installer, and PHP 5.2 through that. However, when trying to access a simple test.php file (it just has phpinfo() in it), I get the following list of errors:

        • IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly.
        • IIS was not able to process configuration for the Web site or application.
        • The authenticated user does not have permission to use this DLL.
        • The request is mapped to a managed handler but the .NET Extensibility Feature is not installed.

    The domain was created with dot net panel, but I don't think that has to do with this problem, unless maybe it uses a specific user? Maybe I need to add PHP through dot net panel? Any idea of what I'm doing wrong here?

    Read the article

  • Write permissions denied on linked tables between MS Access 2003 and 2007

    - by STEVE KING
    We are in the process of switching over to Access 2007. We have numerous data tables in Access 2003 files. In one case, the user has 2007 on his PC and opened the front end in 2007. No problems. When the user is done, he clicks a button that executes a macro full of update queries. The macro reaches the first query and halts. We get a message saying we do not have permission to write to this linked table (2003 format). There were no security files involved. We re-linked from 2007; same problem. LAN permissions were OK. I wound up having to import the tables into the front end in order for the user to be able to do his job.

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here, I've asked why, ["it just didn't work"] and I don't have enough time to make it work before the window that's already scheduled. So the usual method is to install yum-downloadonly and run yum update --downloadonly --downloaddir=/mnt/cifs_share and then yum update /mnt/cifs_share/*.rpm which just does not look right to me since not all of these machines have the same set of installed packages. The method I tried today was mounting the share to /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/ which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
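
    One kludge along the lines already tried: yum deletes downloaded packages after a successful transaction unless keepcache is enabled, so enabling it and pointing the cache at the share keeps the RPMs around for the other machines. A sketch under assumptions (the bind-mount path is an example, and concurrent writes from many machines to one CIFS cache should be staggered):

        # 1. Tell yum to keep downloaded packages after installing them.
        #    (RHEL6 ships keepcache=0 in /etc/yum.conf; add the line if absent.)
        sudo sed -i 's/^keepcache=0/keepcache=1/' /etc/yum.conf

        # 2. Share one cache by bind-mounting the CIFS share over the cache
        #    directory (or set cachedir= in /etc/yum.conf instead).
        sudo mount --bind /mnt/cifs_share/yum-cache /var/cache/yum

        # 3. Update as usual; RPMs fetched by the first machine are reused by
        #    the rest, while each box still resolves its own package set.
        sudo yum update

    This avoids the yum update /mnt/cifs_share/*.rpm approach entirely, since every machine still installs only what its own dependency resolution asks for.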

    Read the article

  • How can I send raw postscript to a remote printer via CUPS?

    - by Ash
    I have an ancient fax device with a printer interface that only accepts PostScript Level 1 documents formatted in a specific way. I only have access to this printer over the lpd:// protocol. I have some old documents from our previous system that work fine on our Unix machines, but they are altered somehow by CUPS when I use lp on our Linux system. The PDF files that end up in the print queue are significantly modified versions of the original, and although they still render in Ghostscript, they don't do anything on the printer. I'm wondering if there's a way to tell CUPS "don't process this, just send it to the printer without modification", or whether there's an lpd client or procedure I could try?
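
    CUPS can be told to skip its filter chain either per job or per queue. A sketch with placeholder printer and host names (faxprinter, faxraw, and fax-host are examples, not your actual queue names):

        # One-off: send the file to an existing CUPS queue unfiltered.
        lp -d faxprinter -o raw old-document.ps

        # Or define a dedicated raw queue that forwards straight to the lpd
        # device, so every job sent to it bypasses CUPS processing.
        sudo lpadmin -p faxraw -E -v lpd://fax-host/queue -m raw
        lp -d faxraw old-document.ps

    Both forms bypass the filtering that is currently rewriting the documents into PDF.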

    Read the article

  • Copy Ubuntu distro with all settings from one computer to a different one

    - by theFisher86
    I'd like to copy my exact setup from my computer at work to my computer at home, and I'm trying to figure out how to go about doing that. So far I've figured this much out. On the source computer:

        - Run dpkg --get-selections > installed-software and back up the installed-software file
        - Back up /etc/apt/sources.list
        - Back up /usr/share/applications/ to save all my custom Quicklists
        - Back up /etc/fstab to save all my network mounts
        - Back up /usr/share/themes/ to save the customization I've done to my themes

    I'm also going to back up my entire HOME directory. Once I get to the destination computer, I'm going to first do a fresh install of 11.10. Then I'll copy over my HOME directory, /etc/apt/sources.list, /usr/share/applications, /etc/fstab and /usr/share/themes/. Then I'm going to run dpkg --set-selections < installed-software, followed by dselect. That should install all of my apps for me. I'm wondering if there's a way/need to back up dconf and gconf settings from the source computer? I guess that's my ultimate question. I'd also like any notes on anything else that might need backing up before I undertake this project. I hope this post is legit; I figured other people would be interested in knowing this process, and I don't see any other questions that really document this on here. I'd also like to further this project and have each computer routinely back up all the necessary files so that both computers are basically identical at all times. That's stage 2 though...
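
    On the dconf/gconf question: both have their own dump/load tools, so they can be exported on the source machine and re-imported after the fresh install. A short sketch (file names are arbitrary examples):

        # On the source computer:
        dconf dump / > ~/dconf-settings.ini
        gconftool-2 --dump / > ~/gconf-settings.xml

        # On the destination, after the fresh install and first login:
        dconf load / < ~/dconf-settings.ini
        gconftool-2 --load ~/gconf-settings.xml

    Since the whole HOME directory is being copied anyway, the user-level stores (~/.config/dconf/user and ~/.gconf) travel with it, so the explicit dump/load is mainly a safety net and a way to inspect or trim the settings before importing them.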

    Read the article

  • How would you model objects representing the different phases of an entity's life cycle?

    - by Ophir Yoktan
    I believe this scenario is common mostly in business workflows - for example, loan management: the process starts with a loan application, then there's the loan offer, the 'live' loan, and maybe also finished loans.

        - All these objects are related and share many fields
        - All these objects also have many fields that are unique to each entity
        - The variety of objects may be large, and the transformation between them may not be linear (for example: a single loan application may end up as several loans of different types)

    How would you model this? Some options:

        - An entity for each type, each containing the relevant fields (possibly grouping related fields as sub-entities) - this leads to duplication of data.
        - An entity for each object, but instead of duplicating data, each object has a reference to its predecessor (the loan doesn't contain the loaner details, but a reference to the loan application) - this causes coupling between the object structure and the way it was created. If we change the loan application, it shouldn't affect the structure of the loan entity.
        - One large entity, with fields for the whole life cycle - this can create 'mega objects' with many fields. It also doesn't work well when there's a one-to-many or many-to-many relation between the phases.

    Read the article

  • Pull Request Conversations, Inline Diff Enhancements

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today. Pull Request Conversations Previously, the only way for project members and users who submitted pull requests to converse was via e-mail. This complicated the review process and made conversations isolated and difficult to track. For this release, we’ve added functionality that enables you to have those same conversations within the pull request page. When you view a pull request, you’ll now see “Comments” and “Changes” tabs, with current comments displayed. Inline Diff Enhancements We tweaked the inline diff experience to make it easier to traverse diff blocks. When you open up the inline diff experience, you’ll now see up and down arrows. To move between the diff blocks, you can use those arrows or utilize the available keyboard shortcuts. Lastly, we have also brought the inline diff experience to the source control changes page for project and fork changesets. You can see both enhancements live by viewing the associated pull request or changeset changes on WikiPlex. The CodePlex team values your feedback. We are frequently monitoring Twitter, our Discussions, and Issue Tracker. If you have not visited the Issue Tracker recently, please take a few minutes to suggest or vote on a feature you would like to see implemented.

    Read the article

  • Information about rendering, batches, the graphics card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for, so here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background information, I'm clueless. I'm using XNA, so it'd be nice if theory could be related to that. So what I actually want to know is: what happens, and where (CPU/GPU), when you do specific draw actions? What is a batch? What influence do effects, projections etc. have? Is data persisted on the graphics card or is it transferred over every step? When there's talk about bandwidth, are you talking about a graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process happens, that's the GPU's business; I'm interested in all the overhead that precedes that. I'd like to understand what's going on when I do action X, to adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

    Read the article

  • Windows 8 + Ubuntu dual boot

    - by Jack Yuan
    I installed Ubuntu 13.04 alongside Windows 8. Yes, I can access both of them, but the process is kind of long. In the BIOS, EFI is for Windows 8 and legacy support is for Ubuntu. If I choose EFI first, the startup just goes straight to Windows 8 without offering me a choice. If I choose legacy first, the startup will offer me a choice between Windows 8 and Ubuntu, but I can only choose Ubuntu. If I choose Windows 8, there will be an error (a file missing under configuration). That is to say, every time I want to switch to another OS, I have to go into the BIOS and change the priority settings. I heard that Secure Boot might be the cause of this situation, but the thing is that there isn't even an option called "Secure Boot" in my BIOS, which means I cannot disable it. All I want is an option menu that appears every time I turn on my computer so I can easily choose which OS I want for today. Can anyone help me please? Thank you very much!!
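
    A common way out of the EFI-vs-legacy toggling is to reinstall GRUB as an EFI boot loader so that a single GRUB menu lists both Ubuntu and Windows 8; Boot-Repair automates most of this. The sketch below assumes you run it from the installed Ubuntu (or a live session) started in EFI mode, and uses the project's published PPA:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install -y boot-repair
        boot-repair            # then choose "Recommended repair"

    After that, setting the BIOS to boot the EFI entry should present a menu with both operating systems; if Windows still fails to chainload, the report URL that Boot-Repair prints is the usual next step for diagnosis.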

    Read the article

  • Designing communications for extensibility

    - by Thomas S.
    I am working on the design stages of an application that will a) collect data from various sources (in my case that's scientific data from serial ports), keeping track of the age of the data, b) generate real-time statistics (e.g. running averages), and c) display, record, and otherwise handle the data (and statistics). I anticipate that I will be adding both data producers and consumers over time, and would like to design this application abstractly so that I will be able to trivially add functionality with a small amount of interface code. What I'm stumbling on is deciding what communication infrastructure I should use to handle the interfaces. In particular, how should I make the processed data and statistics available to multiple consumers? Some things I've considered:

        - Writing to several named pipes (variable number). Each consumer reads from one of them.
        - Using FUSE to make a userspace filesystem where a read() returns the latest line of data even if another process has already read it.
        - Making a TCP server, and having consumers connect and request data individually.
        - Simply writing the consumers as part of the same program that aggregates the data.

    So I would like to hear your advice on how to interface these functions in the best way to keep them separate and allow room for extensions.

    Read the article

  • Quick and good: (Requirement -> Validation -> Design) for self use?

    - by Yugal Jindle
    How do I casually do the required software engineering and design? I am an inexperienced developer and face the following problem: my company is a start-up and has no fixed software engineering processes. I am assigned tasks with not very clear and conflicting requirements, and I don't have to follow any designs or verify requirements officially. Problem: I code all day and finally get stuck where requirements conflict, and I have to start over again. I cannot spend a lot of time doing a proper SRS or SDD. How should I:

        - List out requirements for myself? (Not an official document.)
        - Verify and validate the requirements?
        - Visualize them?
        - Design them with minimum effort? (As it's going to be with me only.)

    I don't want to waste my time coding something that's going to collapse because of a requirement conflict or something! I don't want to compromise on quality, but I don't want to rewrite everything on some change that I didn't expect. I imagine making a diagram of my thought process that will show me the conflicts in the diagram itself, then finally correcting the diagram - deciding my design and structuring my code in terms of interfaces or something, and then finally implementing my design. I can sense the lack of a systematic approach, but I don't know how to proceed! Update: Please suggest some tools that can ask me the questions and help me aggregate important details. How can I get the diagram that I talked about for requirement verification?

    Read the article

  • frequent abnormal shutdowns/system crashes

    - by user110353
    It's been almost 5 days since I installed Ubuntu, and it's almost the 6th time that my laptop has crashed entirely and shut down abnormally. Actually, it heats up and I have to wait 20-odd minutes before I can turn it on again. A message appears saying that my PC crashed due to overheating, which may damage my hard disk. The crashes happen when I try to open some application that freezes my PC, not even giving me enough time to go to the System Monitor and end the process. Sometimes the culprit application which causes the crash is Everpad, sometimes it's TeamViewer, sometimes it's something else. This is something very serious. The last crash occurred at 09:14:40. Kindly click here to view the system log. I want to stick with Ubuntu and the same laptop, as I had serious issues with Windows and I nearly went out to dump my laptop and purchase a more powerful system. Below are my hw/os specs. Kindly advise on how to resolve this issue. Ubuntu 12.10, Kernel 3.5.0-18-generic, GNOME 3.6.0, Memory 2.0GB, Processor: Genuine Intel CPU [email protected] x 2, Available Disk Space: 63.7 GB. Thanks in advance.
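
    Before blaming individual applications, it helps to log temperatures and correlate them with the shutdowns, since the message points at overheating. A small sketch using lm-sensors (the log path and intervals are arbitrary examples):

        sudo apt-get install lm-sensors
        sudo sensors-detect            # accept the defaults at the prompts
        sensors                        # one-off temperature/fan reading
        watch -n 5 sensors             # live view while running a suspect app

        # Keep a log you can inspect after the next shutdown.
        while true; do { date; sensors; } >> ~/temperature.log; sleep 30; done

    If the readings climb steadily regardless of which application is running, the cause is more likely dust, a blocked fan, or dried thermal paste than any particular program.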

    Read the article

  • Unable to see hidden files

    - by TheGoodUser-Sp
    I have BitDefender (with the latest update) on my Windows 7 machine. I want to see hidden files, so from Tools > Folder Options > View, I change the settings as below and click OK: But I can't see hidden files. When I double-check the options, I see the settings have changed back automatically, as below: I know this is virus-like behaviour! Checking with VirusTotal via Process Explorer didn't help: How can I find out which process is related to this issue?

    Read the article

  • Deleting slow on X11 emacs

    - by Malvolio
    I'm running GNU Emacs 21.4.1 on a remote Linux (CentOS) box, using my MacBook as the X server. It works fine, unless I try to delete a word, line, or region. Then it locks up for 30 seconds or so. It sounds like a minor thing, but you realize how often you do a delete when you have to stop for 30 seconds every time. My theory is that Emacs is trying to put the text in the X server cut-and-paste buffer, which is trying to put it in the OS X cut-and-paste buffer, and somewhere along the way the process is blocked until it times out. (My only evidence for this theory is that (a) copy-region behaves the same way and (b) deleted text doesn't show up in the buffer.) Any suggestions appreciated. Edit: (setq interprogram-cut-function nil) fixed me right up. Which makes perfect sense. Thanks, Trey.

    Read the article

  • Windows 7 Pro x64 suffering from insomnia

    - by gemisigo
    My Windows 7 Pro x64 machine is unable to enter sleep mode. Pressing the sleep button results in a strange state where the computer starts entering sleep mode and the display is turned off, but then the whole process stops: it does not switch off the hard disks, the keyboard still responds to Num Lock/Caps Lock, but the display does not come back on. I have to cut power and restart. If I remove everything (wireless mouse, external HDD, USB hub, etc.) it does enter sleep mode, but if anything is left plugged in it does not, even though every one of those devices has "Allow the computer to turn off this device..." checked and "Allow this device to wake the computer" unchecked. Why does it not work? It was working properly in the RC.

    Read the article
