Search Results

Search found 21395 results on 856 pages for 'process accounting'.

Page 504/856 | < Previous Page | 500 501 502 503 504 505 506 507 508 509 510 511  | Next Page >

  • Looking for a kiosk-style / camera-store easy photo memory card to CD/DVD burning program for a Windows 7 notebook? For a non-techie user.

    - by Rob
    I'm looking for a kiosk-style / camera-shop easy photo memory card to CD/DVD burning program, for a non-techie user. The kind of system you see in a camera shop / store, e.g. in the UK, Jessops and Boots stores. This is for my Dad, who is adept at general PC usage as a notebook owner, but would prefer something fairly simple. The task I'm looking to cover is burning photos to CD/DVD in their original .jpg photo file form, i.e. NOT as a CD or DVD video or slideshow. I'm guessing this might be possible in Picasa, but all the options available might be superfluous and confusing. He could probably learn to use that, but I thought I would try simpler options first. I'm looking for something that guides the user through the steps/stages of the process, 'Wizard' style. Any suggestions? Platform: HP Windows 7 Home notebook with CD/DVD burner and SD memory card slot.

    Read the article

  • How to focus on one topic? [closed]

    - by Brian
    I have a huge problem while reading computer books. Every couple of pages I'll end up googling something I want to learn more about, but then I'll find something on that page that I'll want to learn more about and google that (sometimes programming related, sometimes hardware related). Normally, after wasting around 3 hours going into different subjects, I'll return to the original text only to repeat the process a few pages later. Any advice for sticking to one subject and learning that in depth? I have tons of programming books I've read half-way through, since I'll become interested in other languages/topics (not that I'm not interested in the books I've started). Also, what would be worth focusing on in depth? I've gone into Python in the most depth, but for classes I'm learning Java and assembly (ARM and Motorola 68000). Also, I've taken a class on C++. Lately I've been spending most of my time learning about Linux instead of programming, though. I'm not sure what would be worth focusing on the most to get a job. In other words, how can you focus on one topic and not let curiosity about everything else get in the way? Thanks in advance, Brian

    Read the article

  • windows 2000 freezing during large disk write

    - by robert
    We have a Windows 2000 SP4 server which freezes up for about 1 minute while its web app does a ~500MB write operation. I can see the web app start to do I/O activity (through Process Explorer), then the RDP session becomes unresponsive: you can click on windows and buttons but nothing happens. When the disk write finally finishes, the session 'catches up' on all the mouse clicks you did during the freeze in a mad flurry of window activity and the server returns to normal. During the freeze the web app stops as well. The same behaviour happens on the console of the server (so I know it's not a network thing). Nothing appears in the event logs; it's like nothing happened. I have upgraded all the HP hardware drivers to the latest ProLiant support pack, and also run HP hardware diagnostics, which found nothing wrong. What would cause a disk write to lock up the rest of the OS?

    Read the article

  • Automation Question using VMWare Workstation

    - by James K
    I'm running an experiment that requires me to create 100 instances of Windows XP w/SP3 and save each VM instance off to a hard drive. I have to note the time that the VM load starts (starting my timer when I see "Setup is preparing...") until the load ends, when I see the final desktop after the VM loads its drivers. I also have to note the host start and stop time. Is there any way this process can be automated? Each load runs me about 16 minutes and gets really tiresome after a while. BTW... exact timing is not necessary; eyeballing as described above is sufficient for my testing needs.

    Read the article

  • Ubuntu hangs and cannot be soft-reset after resuming from standby mode

    - by Phuong Nguyen
    I have downgraded my xorg drivers so I can hibernate and suspend my Ubuntu smoothly. However, in some cases there's a problem: my Ubuntu hangs. When I try to switch to console mode (Ctrl+Alt+F1), I cannot log in; the system replies with an error whenever I press a key. When I press Ctrl+Alt+Del to perform a soft reset, here's what it says: [80141.320122] end_request: I/O error, dev sda, sector 193687181 init: control-alt-delete main process (5660) terminated with status 2. This error is not even recorded in syslog. I guess this should be a problem with my hard disk, since it says something about a bad sector. What kind of error is this, exactly?

    Read the article

  • Mac OS X Snow Leopard: move to folder on keystroke

    - by Georges Oates Larsen
    On a weekly basis I have to organize thousands of photos (in groups of up to five thousand) into folders depending upon what they contain (to then narrow them down to the best photos of the same thing). This means I am constantly scanning through photos and organizing them into folders. The problem is, the process of stopping my scan and then dragging the photo all the way into a folder myself is bogging me down. Would it be possible, for instance using something like AppleScript, or even going so far as using Xcode/Cocoa, to create a shortcut that moves whatever I have selected in the Finder to a pre-specified folder? Does something like this already exist?

    Read the article

  • fdisk -l only displays boot partition

    - by Franklin
    I have a SAN, and it's able to read and write to the 50TB RAID just fine, but when I run fdisk -l it only lists the boot partition of the SAN server, and doesn't display anything about the other partitions on the RAID. I've also tried using parted -l with the same result. Now when I type mount it shows that the partitions are mounted just fine. I've never seen this happen. The box is running Openfiler 2.3 (I know it's old, we're in the process of upgrading all our old equipment). We have another SAN that's configured almost identically, and it's able to display the partition info with either of the two commands I mentioned above.

    Read the article

  • How to fix /etc/ folder on Mac OS X

    - by justinhj
    I was following a tutorial which had this command to create a launchd.conf file in /etc/:
        sudo echo "some command" > /etc/launchd.conf
    But it wouldn't work; I got permission denied after entering my admin password. So it seemed like the permissions for the link were wrong, so I did 'sudo chmod 755 /etc/'. But now I can't load a terminal; I get the error:
        The administrator has set your shell to an illegal value
    If I try to sudo a command now I get:
        sudo: can't open /private/etc/sudoers: Permission denied
        sudo: no valid sudoers sources found, quitting
        Process tramp/sudo root@localhost exited abnormally with code 1
    This is what the link /etc looks like; what should it look like, and how do I restore it?
        lrwxr-xr-x 1 root wheel 11 Jul 21 2011 etc -> private/etc
        /private/etc ... drw-r--r-- 111 root wheel 3774 Mar 26 02:25 etc
    edit: I'm using Mac OS X 10.7.3

    Read the article

  • Object construction design

    - by James
    I recently started to use C# to interface with a database, and there was one part of the process that appeared odd to me. When creating a SqlCommand, the method I was led to took the form: SqlCommand myCommand = new SqlCommand("Command String", myConnection); Coming from a Java background, I was expecting something more similar to: SqlCommand myCommand = myConnection.createCommand("Command String"); I am asking, in terms of design, what is the difference between the two? The phrase "single responsibility" has been used to suggest that a connection should not be responsible for creating SqlCommands, but I would also say that, in my mind, the difference between the two is partly a mental one: the difference between a connection executing a command and a command acting on a connection, the latter of which seems less like what I have been led to believe OOP should be. There is also a part of me wondering if the two should be completely separate, and should only come together in some sort of connection.execute(command) method. Can anyone help clear up these differences? Are any of these methods "more correct" than the others from an OO point of view? (P.S. the fact that C# is used is completely irrelevant. It just highlighted to me that different approaches were used.)
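    For comparison, here is a minimal JDBC sketch of the Java style the question alludes to, where the connection itself acts as the factory for the command object. The connection URL, table name and parameter value are made up for illustration, and PreparedStatement stands in for the hypothetical createCommand call:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class JdbcFactoryStyle {
            public static void main(String[] args) throws Exception {
                // The connection creates the statement, rather than the
                // statement being constructed around the connection.
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/shop", "user", "password");
                     PreparedStatement cmd = conn.prepareStatement(
                         "SELECT name FROM products WHERE id = ?")) {
                    cmd.setInt(1, 42);
                    try (ResultSet rs = cmd.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("name"));
                        }
                    }
                }
            }
        }

    Either way the command ends up bound to a connection; the design question is really just which class owns the factory method.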

    Read the article

  • How can I insert bullet point data into Microsoft Excel spreadsheet?

    - by REACHUS
    Sometimes when I do some research, I gather data that should be presented as bullet points, preferably in a single cell (as it is the kind of data I would not process in any way in the future). I am looking for a way to make it readable for other people using the spreadsheet (on the screen, as well as when they print the spreadsheet). I would like to make something like this:
        ————————————————————
        | * bullet point 1 |
        | * bullet point 2 |
        | * bullet point 3 |
        ————————————————————
    So far the only solution is to edit something like the above in a text editor and then paste it into Excel (as I cannot really make bullet points in a single cell). Is there any better solution?

    Read the article

  • How to determine the amount to spend per phrase on Adwords research?

    - by Anonymous -
    My company would like to start a PPC advertising campaign. Whilst I understand the concept and how to set everything up from a technical point of view, this is something I've never done before. Logically, we'd like to test out a wide range of keywords that we think would lead to conversions, which we've put together through brainstorming and with some help from Google's External Keyword Tool. Sub-question whilst I remember: am I correct in thinking that in Google's keyword tool, keywords that we think will perform well that have low competition yet high monthly searches are good, since there will be fewer advertisers, meaning our bid per click will be less? Is there a common benchmark or process for doing a round of tests with keywords? Should we wait for 100 clicks on each keyword, see which ones have led to the most sales (or rather, sales that are sustainable with the cost per click of that keyword), then drop the ones which aren't converting and put that budget onto the converting keywords? We realistically have a few hundred keywords/phrases we would like to test, but spending $100 per keyword/phrase is going to work out as quite an expensive test. It would be nice to be able to spend $5-10 per phrase, but I don't think the sample size would be great enough to determine anything reliably useful. Another approach might be to set up all the keywords, and those that bring the most sales within x hours/days would be the ones we use. What is the common procedure with things like this? I know there are a plethora of companies that specialize in exactly this, but this is something we anticipate doing a lot in the future, so it would make sense to do it in house if at all possible.

    Read the article

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in the case of problems with any of our BizTalk solutions.  When you move to BizTalk Server 2009 it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009 much of the functionality of HAT can now be found in the BizTalk Administration console.  Select a BizTalk Group and you will be shown the Group Hub Overview page.  This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries.  These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:
        Messages - last 100 received
        Messages - last 100 sent
        Messages - last 50 suspended
        Service instances - last 100
    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go.  Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

    Read the article

  • Memory limit on PHP + Apache + Windows 32 bit?

    - by thkala
    I am considering using Apache 32-bit for a Moodle installation on a Windows 2008 R2 64-bit/16GB server. Since the available memory affects the number of concurrent users that can be served, I was wondering how the 2GB memory limit on 32-bit Windows processes affects Apache+PHP. Is it a collective limit for the whole server, or is it applied separately to each Apache child process/thread? If it is separate, how many of those children are launched on Windows? One per request? One per processor core? Something in the middle? Is this somehow configurable?

    Read the article

  • Why can't gif images copy at a reasonable speed on this dell laptop with XP?

    - by alt234
    I've got this somewhat old Dell Latitude D810. Strangest thing... If I try to copy anything that has gif files in it the gif files take forever. Like a few minutes per gif regardless of size. Everything else copies fine. I notice this when copying files off our network, copying off multiple external drives, and even when files are copying during an installation process. I'm on Windows XP Pro service pack 3. I've never seen anything like this before. Anyone else?

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:
        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

    Read the article

  • Putting our OLTP and OLAP services on the same cluster

    - by Dynamo
    We're currently in a bit of a debate about what to do with our scattered SQL environment. We are setting up a cluster for our data warehouses for sure and are now in the process of deciding if our OLTP databases should go on the same one. The cluster will be active/active with database services running on one node and reporting and analytical services on the other node. From a technical standpoint I don't see an issue here. With the services being run on different nodes they shouldn't compete too heavily for resources. The only physical resource that may be an issue would be the shared disk space. Our environment is also quite small. Our biggest OLAP database at the moment is only about 40GB and our OLTP are all under 10GB. I see a potential political issue here as different groups are involved but I'm just strictly wondering if there would be any major technical issues that could arise from this setup.

    Read the article

  • Behaviour of nginx as proxy

    - by HD
    I'm testing nginx with different configurations to replace an architecture that currently runs on squid + apache. I know that I can use nginx to manage static requests and do load balancing, but I'm interested in one particular solution that I don't understand clearly: I'm using 2 nginx servers (balanced) with the proxy_pass setting to pass all requests to an apache server. When a client makes a request to the site, one of the nginx servers processes it and sends it to the apache server. Now, how could this behaviour be an improvement to my system? It seems that all requests still pass through apache, and I don't see any benefit at all. What happens when 100 simultaneous connections pass through nginx? Will all 100 connections go to the apache server, or is there some kind of internal behaviour that reduces the impact on apache?

    Read the article

  • Naming a class that processes orders

    - by p.campbell
    I'm in the midst of refactoring a project. I've recently read Clean Code, and want to heed some of the advice within, with particular interest in the Single Responsibility Principle (SRP). Currently, there's a class called OrderProcessor in the context of a manufacturing product order system. This class currently performs the following routine every n minutes:
        check the database for newly submitted + unprocessed orders (via a Data Layer class already, phew!)
        gather all the details of the orders
        mark them as in-process
        iterate through each to:
            perform some integrity checking
            call a web service on a 3rd party system to place the order
            check the status return value of the web service for success/fail
            email somebody if the web service returns fail
            constantly log to a text file on each operation or possible fail point
    I've started by breaking out this class into new classes like:
        OrderService - poor name. This is the one that wakes up every n minutes
        OrderGatherer - calls the DL to get the order from the database
        OrderIterator (? seems too forced or poorly named)
        OrderPlacer - calls the web service to place the order
        EmailSender
        Logger
    I'm struggling to find good names for each class, and to implement SRP in a reasonable way. How could this class be separated into new classes with discrete responsibilities?
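    As a rough illustration of the kind of split being described (sketched here in Java; all names, interfaces and signatures below are assumptions, not the actual project code):

        import java.util.List;

        // One possible decomposition: each collaborator has a single reason to change.
        interface OrderGatherer { List<Order> gatherUnprocessed(); }   // wraps the data layer query
        interface OrderValidator { boolean isValid(Order order); }     // integrity checking
        interface OrderPlacer { boolean place(Order order); }          // 3rd-party web service call
        interface FailureNotifier { void notifyFailure(Order order); } // email on failure
        interface ActivityLog { void info(String message); }           // text-file logging

        class Order { long id; }

        // The scheduled entry point only coordinates the run.
        class OrderProcessingRun {
            private final OrderGatherer gatherer;
            private final OrderValidator validator;
            private final OrderPlacer placer;
            private final FailureNotifier notifier;
            private final ActivityLog log;

            OrderProcessingRun(OrderGatherer gatherer, OrderValidator validator,
                               OrderPlacer placer, FailureNotifier notifier, ActivityLog log) {
                this.gatherer = gatherer;
                this.validator = validator;
                this.placer = placer;
                this.notifier = notifier;
                this.log = log;
            }

            void runOnce() {
                for (Order order : gatherer.gatherUnprocessed()) {
                    log.info("Processing order " + order.id);
                    if (!validator.isValid(order) || !placer.place(order)) {
                        notifier.notifyFailure(order);
                    }
                }
            }
        }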

    Read the article

  • Ubuntu 12.04 and Nvidia GTX 550 Ti

    - by Jim
    OK, I'm currently trying to install Ubuntu x64 Server 12.04 onto the following machine: Intel Core i5-2320 3.0GHz LGA1155 6MB, 8 GB DDR3 RAM, Gigabyte Z68P-DS3 S1155 Intel Z68 DDR3 ATX M/B, OCZ 60 GB SSD, 3x Samsung 2TB drives in a RAID 5 array (via M/B). Now what I think is causing the issue is the following: EVGA GeForce GTX 550 Ti 951MHz 1GB PCI-Express HDMI FPB. As the server CD works in text mode I haven't had a problem with actually installing Ubuntu. Partitioned with: 1GB /boot SSD, 59GB / SSD, 10GB swap RAID5, ~4TB /home RAID5. On a straight boot, you briefly see the GRUB menu, followed by a blank screen. The keyboard and mouse blink as they are initialised, but there is no sign of life from the screen. After a bit of research (otherwise known as Google)... I booted with quiet splash nomodeset. Now I have a fully working Linux distro at the command prompt. I then proceeded to try and update the nvidia drivers with apt-get (after updating repositories etc.) and rebooting. Still the same problem. I also tried reinstalling from the CD and installing said drivers in the install process before GRUB was installed; still the same symptoms. Does anybody have any solutions? I'm at my wits' end here; I bought this machine to be a Linux server / tinkering machine and have just spent 4-5 hours trying to just get a basic install working.

    Read the article

  • Is it faster to create indexes before or after data loading in MySQL?

    - by Josh Glover
    I have a data replication process that drops and recreates a few tables in a target database, then loads them up with data from a source database (running on another host, but that is immaterial to the question at hand). The target database does need primary keys and a few other indexes on its tables, but not during the data loading. I'm currently loading all of the data, then creating the indexes. However, index creation takes a pretty long time: 30 minutes of my data loader's 5 and a half hour running time. My intuition tells me that creating the indexes at the end should still be faster than creating them first, since otherwise each index would need to be updated with every insert. Can anyone tell me for sure which way is faster? FWIW, I'm running MySQL 5.1 with InnoDB tables.
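    A minimal JDBC sketch of the "load first, index afterwards" ordering described above; the table, columns and connection details are made up, and this only illustrates the sequencing, not a claim about which ordering wins:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Statement;

        public class LoadThenIndex {
            public static void main(String[] args) throws Exception {
                try (Connection target = DriverManager.getConnection(
                        "jdbc:mysql://localhost/target_db", "user", "password")) {
                    // Drop and recreate the bare table with no secondary indexes.
                    try (Statement ddl = target.createStatement()) {
                        ddl.execute("DROP TABLE IF EXISTS employees");
                        ddl.execute("CREATE TABLE employees (id INT, name VARCHAR(100)) ENGINE=InnoDB");
                    }
                    // Bulk-load the rows while the table is unindexed.
                    try (PreparedStatement insert = target.prepareStatement(
                            "INSERT INTO employees (id, name) VALUES (?, ?)")) {
                        for (int i = 0; i < 100000; i++) {
                            insert.setInt(1, i);
                            insert.setString(2, "row-" + i);
                            insert.addBatch();
                            if ((i + 1) % 1000 == 0) insert.executeBatch();
                        }
                        insert.executeBatch();
                    }
                    // Only now build the keys, in one pass, instead of
                    // maintaining them on every insert.
                    try (Statement ddl = target.createStatement()) {
                        ddl.execute("ALTER TABLE employees ADD PRIMARY KEY (id), ADD INDEX idx_name (name)");
                    }
                }
            }
        }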

    Read the article

  • Android From Local DB (DAO) to Server sync (JSON) - Design issue

    - by Taiko
    I sync data between my local DB and a server. I'm looking for the cleanest way to model all of this. I have a com.something.db package that contains a DataHelper and a couple of DAO classes that represent objects stored in the db (I didn't write that part):
        com.something.db
        --public DataHelper
        --public Employee
            @DatabaseField   (e.g. "name" will be an actual column name in the DB)
            -name
            @DatabaseField
            -salary
            etc... (all in all 50 fields)
    I have a com.something.sync package that contains all the implementation detail on how to send data to the server. It boils down to a ConnectionManager that is fed by different classes that implement a 'Request' interface:
        com.something.sync
        --public interface ConnectionManager
        --package ConnectionManagerImpl
        --public interface Request
        --package LoginRequest
        --package GetEmployeesRequest
    My issue is, at some point in the sync process, I have to JSONise and de-JSONise my data (e.g. the Employee class). But I really don't feel like having the same Employee class be responsible for both its JSONisation and its actual representation inside the local database. It really doesn't feel right, because I carefully decoupled the rest; I am only stuck on this JSON thing. What should I do? Should I write 3 Employee classes?
        EmployeeDB
            @DatabaseField   (e.g. "name" will be an actual column name in the DB)
            -name
            @DatabaseField
            -salary
            -etc... 50 fields
        EmployeeInterface
            -getName
            -getSalary
            -etc... 50 fields
        EmployeeJSON
            -JSON_KEY_NAME = "name"   (the JSON key happens to be the same as the column name, but it isn't a requirement)
            -name
            -JSON_KEY_SALARY = "salary"
            -salary
            -etc... 50 fields
    It feels like a lot of duplicates. Is there a common pattern I can use here?
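    One common way out is to keep a single field-carrying entity and move the JSON knowledge into a separate mapper, so the 50 fields only live in one class. A minimal sketch (class names, keys and the use of a plain map are assumptions for illustration; a real implementation would hand the map to whatever JSON library the sync layer already uses):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // The DB-annotated entity stays the only class that carries the fields.
        class Employee {
            // imagine the @DatabaseField annotations living here, as in the question
            String name;
            int salary;
        }

        // The sync layer owns the JSON keys and the conversion, not the entity.
        class EmployeeJsonMapper {
            static final String KEY_NAME = "name";
            static final String KEY_SALARY = "salary";

            // Convert the entity into a plain map a JSON library can serialize.
            Map<String, Object> toJsonMap(Employee e) {
                Map<String, Object> json = new LinkedHashMap<>();
                json.put(KEY_NAME, e.name);
                json.put(KEY_SALARY, e.salary);
                return json;
            }

            // Rebuild the entity from a parsed JSON map coming back from the server.
            Employee fromJsonMap(Map<String, Object> json) {
                Employee e = new Employee();
                e.name = (String) json.get(KEY_NAME);
                e.salary = ((Number) json.get(KEY_SALARY)).intValue();
                return e;
            }
        }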

    Read the article

  • Indenting an x number of lines in vim

    - by Mack Stump
    I've been coding in Java for a job recently and I've noticed that I'll write some code and then determine that I need to wrap the code in a try/catch block. I've just been moving to the beginning of a line and adding a tab. 0 i <tab> <esc> k (repeat process until at beginning or end of block) Now this was fine the first three or four times I had to indent but now it's just become tedious and I'm a lazy person. Could someone suggest an easier way I could deal with this problem?

    Read the article

  • Loadbalancing Questions

    - by Van Holtz
    I have been learning networking for about 4 months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters to allow even more players to log in and to avoid latency issues. I have now prototyped the load-balancing server and it's running pretty well so far. This is my architecture: I have a master server which acts as a proxy; every sub server (chat, login, game) connects to the master server, as do all the clients. When a client connects, a request flows like this:
        Client - sends request - MS (Master) - decides which SS (SubServer) to forward to - forwards request to SS
        SS - analyzes message - sends response to MS - MS decides which client to forward to - forwards response to client
    Well, it looks like it's going through lots of stages; it takes double the time to process a message compared to the single-server approach. I feel like my model isn't the best, or I may be wrong. Is there a better model, or one that professional games use? I still want a Master-SubServer approach. I just want to clarify that I'm going in the right direction before writing all my code. Thanks for any answer :)
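    A toy sketch of the routing step described above, just to make the flow concrete (the message types, class names and interfaces are made up for illustration and are not the poster's code):

        import java.util.Map;

        // The kinds of sub servers mentioned in the question.
        enum MessageType { LOGIN, CHAT, GAME }

        class Message {
            MessageType type;
            int clientId;
            byte[] payload;
        }

        interface SubServerConnection { void forward(Message m); }

        // "Decides which SS to forward to": the master only inspects the type
        // and relays the message, it does no game logic itself.
        class MasterRouter {
            private final Map<MessageType, SubServerConnection> subServers;

            MasterRouter(Map<MessageType, SubServerConnection> subServers) {
                this.subServers = subServers;
            }

            void onClientMessage(Message m) {
                subServers.get(m.type).forward(m);
            }
        }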

    Read the article

  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users shall pay a certain sum of money per hour. Routing and traffic accounting on a per-user basis is done by an openSUSE 10.3 server. Login is done via pppoe, and for each connection, username, bytes_sent, bytes_rcvd, start_time, end_time, etc. are written into a mysql database. Now it was decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to do the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point where I'm absolutely stuck. Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections and how much they cost via the web interface. For this, we need some kind of user authentication. Actual question: How to develop such a user authentication? Every user has a linux system user account. With this user name and password, the connection to the pppoe server is made by the client machines. I thought about two possible ways to authenticate users.
    First possibility: Users type username and password in a form. This is then somehow checked. We already have the possibility to change passwords via the web interface. Here are parts of the code. Part of the Perl script the homepage is linked to:
        #!/usr/bin/perl
        use CGI;
        use CGI::Carp qw(fatalsToBrowser);
        use lib '../lib';
        use own_perl_module;

        my @error;
        my $data;

        $query = new CGI;
        $username = $query->param('username') || '';
        $oldpasswd = $query->param('oldpasswd') || '';
        $passwd = $query->param('passwd') || '';
        $passwd2 = $query->param('passwd2') || '';

        own_perl_module::connect();

        if ($query->param('submit')) {
            my $benutzer = own_perl_module::select_benutzer(username => $username) or push @error, "user not exists";
            push @error, "your password?!?" unless $passwd;
            unless (@error) {
                own_perl_module::update_benutzer($benutzer->{id}, { oldpasswd => $oldpasswd, passwd => $passwd, passwd2 => $passwd2 }, error => \@error) and push @error, "Password changed.";
            }
        }
    Here's part of the sub update_benutzer in the own_perl_module:
        if ($dat->{passwd} ne '') {
            my $username = $dat->{username} || $select->{username};
            my $system = "./chpasswd.pl '$username' '$dat->{passwd}'" . (defined($dat->{oldpasswd}) ? " '$dat->{oldpasswd}'" : undef);
            my $answer = `$system`;
            if ($? != 0) {
                chomp($answer);
                push @$error, $answer || "error changing password ($?)";
    Here's chpasswd.pl:
        #!/usr/bin/perl
        use FileHandle;
        use IPC::Open3;

        local $username = shift;
        local $passwd = shift;
        local $oldpasswd = shift;

        local $chat = {
            'Old Password: $' => sub { print POUT "$oldpasswd\n"; },
            'New password: $' => sub { print POUT "$passwd\n"; },
            'Re-enter new password: $' => sub { print POUT "$passwd\n"; },
            '(.*)\n$' => sub { print "$1\n"; exit 1; }
        };

        local $/ = \1;

        my $command;
        if (defined($oldpasswd)) {
            $command = "sudo -u '$username' /usr/bin/passwd";
        } else {
            $command = "sudo /usr/bin/passwd '$username'";
        }

        $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die;

        my $buffer;
        LOOP: while($_ = <PERR>) {
            $buffer .= $_;
            foreach (keys(%$chat)) {
                if ($buffer =~ /$_/i) {
                    $buffer = undef;
                    &{$chat->{$_}};
                }
            }
        }
        exit;
    Could this somehow be adjusted to verify users, but not change user passwords?
    The second possibility I see: all pppoe connections are logged in the mysql database. If I could somehow retrieve the username (or uid) of the user connected by pppoe, this could be used to authenticate users. Users could only check their internet connections and costs when they are online (and thus paying money), but this could be tolerated. Here's the line of the script that inserts connections into the database:
        my $username = $ENV{PEERNAME};
    I thought it would be easy to use this variable, but $username seems to be always empty in test scripts (print $username). Any idea how to retrieve the user connected to the pppoe server? Sorry for the long question! Any help would be very much appreciated. :)

    Read the article

  • information about /proc/pid/sched

    - by redeye
    Not sure this is the right place for this question, but here goes: I'm trying to make some sense of the /proc/pid/sched and /proc/pid/task/tid/sched files for a highly threaded server process, but I was not able to find a good explanation of how to interpret these files (just a few bits here: http://knol.google.com/k/linux-performance-tuning-and-measurement#). I assume this entry in procfs is related to newer versions of the kernel that run with the CFS scheduler? CentOS distro running a 2.6.24.7-149.el5rt kernel with the preempt rt patch. Any thoughts?

    Read the article
