Search Results

Search found 6511 results on 261 pages for 'everyday usage'.

Page 149/261 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • merging two tables, while applying aggregates on the duplicates (max,min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among its fields are:

        Id       - the record id
        Count    - number of times this Id has been reported
        FirstHit - earliest timestamp with which this Id was reported
        LastHit  - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I receive another table (let's call it feed) with around half a million records, with these fields among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is update log in the following way:

        Count    - the log Count value, plus the COUNT() of records for that Id found in feed
        FirstHit - the earlier of the current value in log and the minimum value in feed for that Id
        LastHit  - the later of the current value in log and the maximum value in feed for that Id

    Note that many of the Ids in feed are already in log. The simple thing that worked was to create a temporary table and insert into it the union of both, as in:

        SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, COUNT(*) AS Count
        FROM feed GROUP BY Id
        UNION ALL
        SELECT Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I run a select that aggregates MIN(FirstHit), MAX(LastHit) and SUM(Count):

        SELECT Id, MIN(FirstHit), MAX(LastHit), SUM(Count) FROM @temp GROUP BY Id;

    That gives me the end result. I could then delete everything from log and replace it with the contents of the temporary table, or craft an update for the common records and an insert for the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps updating log in place?
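    The @temp table variable syntax suggests SQL Server; if that is the case, a single MERGE can update the existing log rows and insert the new ones in one pass, without a staging table. The sketch below is only illustrative and assumes log.Id is the primary key:

        MERGE log AS t
        USING (
            SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, COUNT(*) AS Cnt
            FROM feed
            GROUP BY Id
        ) AS s
        ON t.Id = s.Id
        WHEN MATCHED THEN UPDATE SET
            t.[Count]  = t.[Count] + s.Cnt,
            t.FirstHit = CASE WHEN s.FirstHit < t.FirstHit THEN s.FirstHit ELSE t.FirstHit END,
            t.LastHit  = CASE WHEN s.LastHit  > t.LastHit  THEN s.LastHit  ELSE t.LastHit  END
        WHEN NOT MATCHED THEN
            INSERT (Id, [Count], FirstHit, LastHit)
            VALUES (s.Id, s.Cnt, s.FirstHit, s.LastHit);

    On MySQL the same effect can be had with INSERT ... ON DUPLICATE KEY UPDATE using LEAST()/GREATEST() for the timestamps. Either way only the roughly half a million affected rows are touched, instead of rewriting the whole multi-million-row table.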

    Read the article

  • How to meet Windows 8 upgrade's 20 GB requirement on a 40 GB SSD with a 22 GB Windows 7 install?

    - by deryus
    A PC I have has Windows 7 installed on a 40 GB SSD, and I bought a Windows 8 upgrade for it. The current Windows folder on it, however, is 22 GB, and that's after removing hibernation, turning off the pagefile and removing all extra programs/features. So even if I purge every other file and folder, the Windows folder itself takes more than half the disk. The PC also has a 1 TB HDD, but the upgrade installer didn't give me any options about choosing another drive. So, is my only option to reinstall Windows 7 on a larger drive, then proceed with the Windows 8 upgrade? Or is there anything I can remove from the Windows folder that, while it might be dangerous for long-term usage, is fine for the few minutes I need to get Windows 8 installing?
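    If the install is Windows 7 SP1, one thing that can reclaim several gigabytes from the Windows folder itself is deleting the Service Pack backup files; this makes SP1 permanent and non-removable, which seems acceptable right before an upgrade. A hedged sketch, run from an elevated command prompt:

        REM Remove the files kept for uninstalling Service Pack 1 (irreversible)
        DISM /online /Cleanup-Image /SpSuperseded

        REM Then run Disk Cleanup and include the system-file categories as well
        cleanmgr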

    Read the article

  • Lightest way to run Google Hangouts Chrome app on Mac

    - by jadengore
    I recently transitioned to Safari because I'm really tired of how Chrome hogs memory and drains my battery like crazy. The only thing that has been keeping the Chrome icon open is the Hangouts plugin. Basically, I am looking for the lightest way to run Hangouts on my Mac. By light, I mean the least amount of RAM usage, and preferably a way to do it without Chrome open, or a lightweight version of Chrome that only runs extensions. Any suggestions? EDIT: Another thing I noticed is that Hangouts ignores your default browser when links are sent to you in chat: when clicked, they open in Chrome. My question doesn't relate to this at all, but I found it interesting...

    Read the article

  • Any cloud storage service that lets us authenticate the file when we serve it to our visitors?

    - by TORr0t
    Let's say I want to restrict a file to paying visitors. I mean, I have an xx.avi file to be streamed/downloaded, and the visitor has paid me for the bandwidth and the size of the file. In Amazon S3 I can't control access to the file at all (there is a very basic access-control mechanism, but it isn't enough for me). The only way I see is to have my server proxy the file: it fetches the file from the Amazon S3 storage node and sends it on to the visitor after an authentication check in a PHP script. But this way I would double the bandwidth usage, and there would also be a latency problem, since my server needs to get the file from Amazon S3 first. So I was wondering if there is a better solution, or any cloud storage service that lets me control which visitors can access a file. Thanks
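    For what it's worth, Amazon S3 can serve time-limited, signed download URLs (query-string authentication): a PHP script checks that the visitor has paid and then hands out a URL that expires after a few minutes, so S3 serves the bytes directly and the server never proxies the file. A minimal sketch of the classic signing scheme; the bucket, key and credentials are placeholders:

        <?php
        // Build a time-limited S3 download URL (signature version 2, query-string auth).
        $accessKey = 'AKIA_PLACEHOLDER';
        $secretKey = 'SECRET_PLACEHOLDER';
        $bucket    = 'my-bucket';
        $key       = 'videos/xx.avi';
        $expires   = time() + 300;                 // link is valid for 5 minutes

        $stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$key}";
        $signature    = urlencode(base64_encode(
            hash_hmac('sha1', $stringToSign, $secretKey, true)));

        $url = "https://{$bucket}.s3.amazonaws.com/{$key}"
             . "?AWSAccessKeyId={$accessKey}&Expires={$expires}&Signature={$signature}";

        echo $url;

    The official AWS SDKs expose the same thing as a "pre-signed URL" helper, which is the safer route for anything beyond a sketch.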

    Read the article

  • Having trouble matching a '.' using htaccess.

    - by ThisLanham
    I'm setting up a website that (ideally) would allow users to access other users' homepages with a URL in the format "www.mysite.com/ThisLanham", where 'ThisLanham' is the username. The username begins with a letter and can consist of any alphanumeric characters along with underscores, hyphens, or periods. So far, the redirection has worked perfectly as long as I ignore the period character. The following rule handles that request:

        RewriteRule ^([a-zA-Z][0-9a-zA-Z-_]*)/?$ Page/?un=$1 [NC,L]

    However, I've tried a number of ways to check for the period as well, and all have resulted in a 500 Internal Server Error. Here are some of my attempts:

        RewriteRule ^([a-zA-Z][0-9a-zA-Z-_\.]*)/?$ Page/?un=$1 [NC,L]
        RewriteRule ^([0-9a-zA-Z-_\.]*)/?$ Page/?un=$1 [NC,L]
        RewriteRule ^([a-zA-Z].*)/?$ Page/?un=$1 [NC,L]
        RewriteRule ^(.*)/?$ Page/?un=$1 [NC,L]

    I also tried:

        RewriteCond $1 != index.php
        RewriteRule ^([a-z][0-9a-z-_.]*)/?$ Page/?un=$1 [NC,L]

    My backup plan is to no longer allow users to include periods in their usernames, but I'd much rather find a solution. Any ideas?
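    One plausible cause of the 500 here is mod_rewrite hitting its internal recursion limit: once the pattern admits dots, requests for real files such as index.php (and the rewritten Page target itself) keep matching and being rewritten until Apache gives up. Excluding requests that already map to an existing file or directory usually breaks the loop; this is a sketch built around the question's own rule, not a tested drop-in:

        RewriteEngine On
        # Leave real files and directories (index.php, Page/, css, images, ...) alone
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([a-zA-Z][0-9a-zA-Z_.-]*)/?$ Page/?un=$1 [NC,L]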

    Read the article

  • Server Sizing Methodology

    - by adbrpc
    Our development environment consists of JBoss 5.0.1, SQL Server 2008 as the DB server, and Oracle IDM. The hardware is Windows 2008 32-bit with 4 GB RAM. We have reached a stage where the environment cannot handle the application: JBoss shuts down with out-of-memory errors and CPU usage reaches 90%. I am looking for a sizing methodology where I can input TPS, maximum number of concurrent users, maximum CPU utilization, etc., and get back the number of servers, RAM size, and number of cores. I expect the application to grow 10% annually. Load balancing and failover should also be taken into account in the sizing.
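    There is no single standard formula, but a common starting point is a utilization-law estimate: measure the CPU time one transaction costs under a load test, then size for peak TPS at a target per-core utilization and add headroom for growth and failover. A purely illustrative back-of-the-envelope sketch (every number below is invented):

        peak load              = 200 TPS
        CPU time / transaction = 0.05 s          (from a load test)
        target utilization     = 0.6 per core    (headroom for spikes)

        cores needed           = 200 * 0.05 / 0.6   = ~17 cores
        3 years of 10% growth  = 17 * 1.1^3         = ~23 cores
        N+1 on 8-core servers  = ceil(23 / 8) + 1   = 4 servers

    Memory is sized the same way: per-session footprint times peak concurrent users, plus the JVM heap and OS baseline, again with growth and failover margin.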

    Read the article

  • elastix CDR stopped working

    - by dreddko
    CDR was working until 19 March. Unfortunately I don't remember what changes I made to the configuration, but they were definitely not changes to the CDR config. Versions: Elastix 2.4.0, Asterisk 11.7.0, MySQL 5.0.95.

        elastix*CLI> cdr show status
        Call Detail Record (CDR) settings
        ----------------------------------
        Logging:    Disabled
        Mode:       Simple

        /etc/asterisk/cdr.conf
        [general]
        enable=yes
        unanswered = yes

        /etc/asterisk/cdr_mysql.conf
        [global]
        hostname = localhost
        dbname=asteriskcdrdb
        password = *MYPASSWROD*
        user = asteriskcdruser
        userfield=1
        ;port=3306
        ;sock=/tmp/mysql.sock
        loguniqueid=yes

        mysql> SHOW GRANTS FOR 'asteriskcdruser'@'localhost';
        GRANT USAGE ON *.* TO 'asteriskcdruser'@'localhost' IDENTIFIED BY PASSWORD 'HASHHERE'
        GRANT ALL PRIVILEGES ON `asteriskcdrdb`.* TO 'asteriskcdruser'@'localhost'
        2 rows in set (0.00 sec)
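    Logging: Disabled in cdr show status indicates the CDR core itself is off, independent of the MySQL backend, so the GRANTs are probably a red herring. A few hedged things to check from the Asterisk CLI (module names as shipped with Asterisk 11):

        ; is the MySQL CDR backend loaded at all?
        elastix*CLI> module show like cdr

        ; load it if it is missing
        elastix*CLI> module load cdr_mysql.so

        ; re-check whether logging flips back to Enabled
        elastix*CLI> cdr show status

    If cdr.conf really has enable=yes but the status stays Disabled, a full restart (amportal restart on Elastix, or core restart now from the CLI) forces cdr.conf to be re-read.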

    Read the article

  • Xen virtual machine getting too high a percentage of CPU

    - by ki0
    Hi everyone. I have one Xen server with 8 CPUs and 6 virtual machines running; each virtual hard disk lives on a different physical hard disk. Everything worked fine, but sometimes one virtual machine grabs almost all the CPU: Domain-0 sitting at 90% is normal, but that virtual machine shows 500% CPU usage. I have verified that it does not depend on who is working on the VM; it happens even when nobody is using the server. I don't know what is going on. Does anyone have any idea, or has the same thing happened to anyone else?
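    For narrowing this down: 500% simply means the guest is saturating five of its virtual CPUs, and the per-domain view from dom0 usually shows whether it is one runaway guest process or a scheduling issue. A hedged sketch with the classic xm toolstack tools:

        # live per-domain CPU, memory and I/O, refreshed every 2 seconds
        xentop -d 2

        # which physical CPUs each vCPU is pinned to, and per-vCPU time used
        xm vcpu-list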

    Read the article

  • BackupPC backup has not finished in 12 hours(!)

    - by chronoz
    I installed BackupPC today on a server and set it to do a backup 12 hours ago... it has been backing up ever since, but it seems very, very slow and has not completed yet. It is only backing up a test server with a total disk usage of 1.8 GB. What could cause the backup process to be so slow? rsnapshot always worked wonderfully fast, but I want to improve my backup solution. df shows that usage on the backup disk is actually still increasing.

    Read the article

  • Resolve local subdomain on apache for paths within user dir

    - by MaoPU
    On Apache 2.2.x I've activated mod_userdir. I used the default setup, so http://localhost/~name/ maps to ~name/public_html/, and a path within public_html, e.g. ~name/public_html/mySite, can be reached through http://localhost/~name/mySite. How can I achieve that the same path can also be reached through http://mySite.name.localhost/? I don't want a manual approach like the one suggested in other SF questions (such as http://serverfault.com/q/133921/53624), but rather an automatic mapping of all available paths to the corresponding URLs. I think several steps will be needed: change the mod_userdir configuration so that the subdomain of localhost is connected with all available user names on the machine; the second step would maybe involve mod_rewrite, so that the sub-subdomain could be matched to the path within ~name/public_html... What would be your preferred way?
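    Assuming wildcard name resolution for *.name.localhost already points at the machine (via /etc/hosts entries or a local DNS), one largely automatic way to get this mapping is mod_vhost_alias instead of mod_userdir plus mod_rewrite: it interpolates the document root from the Host header. A sketch for a conventional /home/<user>/public_html layout (enable the module with a2enmod vhost_alias on Debian/Ubuntu):

        # mySite.name.localhost  ->  /home/name/public_html/mySite
        # %1 and %2 are the first and second dot-separated parts of the hostname
        UseCanonicalName Off
        VirtualDocumentRoot /home/%2/public_html/%1

        <Directory /home/*/public_html>
            Order allow,deny
            Allow from all
        </Directory>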

    Read the article

  • Is it safe to use up all memory on linux server, not leaving anything for the cache?

    - by Temnovit
    I have a CentOS server fully dedicated to MySQL 5.5 (with mostly InnoDB tables). The server has 32 GB RAM and SSD disks; on average about 25 GB of memory is in use and about 6.5 GB is cached. I am experiencing performance problems with WRITE queries, so I was wondering: is this the optimal cache size? I could increase the InnoDB buffer pool size, so that the Linux cache becomes smaller, or decrease it, so the cache becomes bigger. What is the optimal used/cached memory balance for a busy MySQL server on Linux?
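    For a dedicated InnoDB box the usual rule of thumb is to give most of the RAM to the buffer pool and let InnoDB bypass the OS page cache for its data files, which makes the "cached" figure largely irrelevant for reads; write latency is more often governed by the redo log and flush settings than by the OS cache. A hedged sketch with illustrative (not tuned) values for a 32 GB host:

        # /etc/my.cnf
        [mysqld]
        innodb_buffer_pool_size        = 24G
        innodb_flush_method            = O_DIRECT   # skip the OS page cache for data files
        innodb_log_file_size           = 512M       # on 5.5, remove old ib_logfile* after a clean shutdown
        innodb_flush_log_at_trx_commit = 2          # relax per-commit flushing only if losing ~1s of commits is acceptable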

    Read the article

  • How can I fix a computer that is infested with malware and is extremely unresponsive? [closed]

    - by fredley
    Possible Duplicate: How do I get rid of malicious spyware, malware, viruses or rootkits from my PC? I'm troubleshooting a Windows 7 PC for a friend. A couple of days ago it started running 'slow'. It turns out 'slow' means about 15 minutes to the first glimpse of the desktop, and another 30 to show icons. It is possible to open Task Manager, and nothing seems awry: CPU usage at 1-5%, plenty of memory free. The machine is clearly infested with malware, though; in particular a program called 'Optimizer Pro' is demanding money to 'remove 5102 files slowing down my computer', which seems highly suspicious. My problem is that I can't access msconfig (I left it for a couple of hours after having hopefully typed it into the Start Menu and hit enter - nothing seems to have loaded), or anything else, basically. I can boot from a Linux live CD, but can I actually do anything useful from there? System Restore hasn't fixed it either, and Safe Mode exhibits the same behavior.

    Read the article

  • MySQL won't stop; mysqld_safe appeared in top

    - by power4
    My server (CentOS) hosts lots of websites, which collect data from lots of sources with cron. The MySQL config is the default. Recently, PHP failed to communicate with MySQL. At first I just restarted the server, but after the restart PHP still failed to communicate with MySQL. I've tried:

        ps ax | grep mysql

    Then:

        kill -9 ####        (I've also tried killall -9 ####)

    This failed; ps ax | grep mysql shows the killed process id is still there. Then:

        service mysqld start        (I've also tried /etc/init.d/mysqld start)

    I get the reply "Timeout error occurred trying to start MySQL Daemon." When I run top, mysqld_safe appears at the top with about 50% CPU usage. I don't know the size of all the databases. I'm really confused.
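    Two hedged observations that often apply to this picture: mysqld_safe is only a watchdog that restarts mysqld, so killing mysqld alone just makes it respawn, and a start-up timeout with mysqld_safe spinning frequently points to InnoDB crash recovery in progress or a full data partition. The paths below are CentOS defaults and may differ:

        # stop the watchdog first, then the server (use -9 only as a last resort)
        ps ax | grep -E 'mysqld_safe|mysqld'
        kill <pid-of-mysqld_safe>
        kill <pid-of-mysqld>

        # then look for the real reason the daemon will not come up
        tail -n 100 /var/log/mysqld.log
        df -h            # a full datadir partition produces exactly these symptoms
        service mysqld start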

    Read the article

  • A very interesting MySQL problem (related to indexing, millions of records, algorithms)

    - by terence410
    This problem is pretty hard to describe and therefore difficult to search for, so I hope some expert can share an opinion. I have a table with around 1 million records. The structure is similar to this:

        items {
            uid     (primary key, bigint, 15)
            updated (indexed, int, 11)
            enabled (indexed, tinyint, 1)
        }

    The scenario is this: I have to select all of the records every day and do some processing, and it takes around 3 seconds to handle each item. I have written a PHP script to fetch 200 items at a time using the following:

        SELECT * FROM items
        WHERE updated < UNIX_TIMESTAMP(NOW()) - 86400 AND enabled = 1
        LIMIT 200;

    I then update the "updated" field of the selected items to make sure they won't be selected again within one day. The update query is something like this:

        UPDATE items SET updated = UNIX_TIMESTAMP(NOW()) WHERE uid IN (1,2,3,4,...);

    The PHP then continues to run and process the data, which doesn't require any MySQL connection any more. Since I have a million records and each record takes 3 seconds to process, it's definitely impossible to do it sequentially, so I execute the PHP every 10 seconds. However, as time goes by and the table grows, the select gets much slower; sometimes it takes more than 100 seconds to run! Do you have any suggestions as to how I might solve this problem?
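    One hedged suggestion: with separate single-column indexes on updated and enabled, MySQL can use only one of them for this query, and as the table grows it scans and discards more and more non-matching rows before it finds 200 hits. A composite index that matches the WHERE clause usually keeps the fetch cheap regardless of table size:

        -- lets the query seek on enabled = 1 and range-scan updated, touching little else
        ALTER TABLE items ADD INDEX idx_enabled_updated (enabled, updated);

        -- verify the new index is actually chosen
        EXPLAIN SELECT * FROM items
        WHERE enabled = 1 AND updated < UNIX_TIMESTAMP(NOW()) - 86400
        LIMIT 200;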

    Read the article

  • Can anyone recommend free network monitoring tools?

    - by Josamoto
    Hi all. I have a machine at home on a 3G internet connection, and my PC is consuming approximately 200 MB per day in uploads and downloads. I'm using an application called Networx to monitor the upload/download usage, but I am on the lookout for the culprit application munching away at my cap. Networx tells me how much my connection uses in total; I need to know how much each application uploaded/downloaded. Does anyone know of a network connection monitoring utility for Windows 7 that can give me a daily outline of how much data was uploaded and downloaded, broken down PER APPLICATION? Thanks in advance!

    Read the article

  • Updating a column in a table only if it won't be negative after the update, and identifying all updated rows

    - by Azeem
    Hello all, I need some help with a SQL query. Here is what I need to do; I'm lost on a few aspects, as outlined below. I have four relevant tables:

        Table A holds the price per unit for every resource (looked up by resource id).
        Table B holds the funds available to a given user.
        Table C holds the resource production information for a given user (including the number of units to produce every day).
        Table D holds the number of units ever produced by any given user (identified by user id and resource id).

    Having said that, I need to run a nightly batch job to do the following:

        a. For all users, determine whether they have the funds needed to produce the number of resources specified in table C (calculating the cost from table A), and deduct the funds from table B if they are available.
        b. Start the process to produce the resources and, once production is complete, update table D using values from table C.

    I figured the second part can be done with an UPDATE and a subquery. However, I'm not sure how to go about part a. I can only think of using a cursor to fetch each row, examine it and update. Is there a single SQL statement that avoids processing each row manually? Additionally, if any rows weren't updated in part a, the part b SQL should not produce resources for that user. Basically, I'm trying to replace the logic currently in a stored procedure with something that runs a lot faster (and doesn't process each row separately). Please let me know any ideas and thoughts. Thanks! - Azeem
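    For part a, a single set-based UPDATE with the affordability check folded into the WHERE clause usually removes the need for a cursor: rows whose users cannot afford the day's production are simply left untouched, and part b can then be restricted to the users that were actually charged. A sketch with invented table and column names, assuming one production row per user, written as a MySQL-style multi-table UPDATE (SQL Server would phrase the same thing as UPDATE ... FROM with joins):

        -- deduct funds only where the user can afford today's production,
        -- and stamp the row so part (b) knows who was charged
        UPDATE funds      b
        JOIN   production c ON c.user_id     = b.user_id
        JOIN   prices     a ON a.resource_id = c.resource_id
        SET    b.balance      = b.balance - (c.units_per_day * a.price_per_unit),
               b.last_charged = CURRENT_DATE        -- invented helper column
        WHERE  b.balance >= c.units_per_day * a.price_per_unit;

    Part b then joins on last_charged = CURRENT_DATE (or on a dedicated "charged today" flag) so that only paid-for production gets recorded in table D.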

    Read the article

  • Measuring custom statistics with sar

    - by Will Glass
    I have a server application which I think is leaking file handles, and I want to track the usage of file descriptors over time on my Linux (Ubuntu) server. I've figured out that I can count the file descriptors in use by a process with:

        lsof -p `pgrep the-process-name` | wc -l

    Since I'm already using sysstat and sar to track various metrics, I thought it'd be nice to display this with sar too. I want to measure it every 10 minutes. Is it possible to add a custom metric to sar? Then I could easily report on it. If not, I'll write a simple cron job to collect the data and store it separately in a log file.
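    As far as I know, stock sadc/sar has no hook for user-defined metrics, so the cron-job fallback is the usual route. A hedged sketch that samples every 10 minutes; the log path is a placeholder, the-process-name comes from the question, and counting /proc/<pid>/fd is cheaper than running lsof:

        # /etc/cron.d/fd-count -- open file descriptor count, sampled every 10 minutes
        # (in cron files the % character must be escaped as \%)
        */10 * * * * root echo "$(date '+\%F \%T') $(ls /proc/$(pgrep -x the-process-name)/fd 2>/dev/null | wc -l)" >> /var/log/fd-count.log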

    Read the article

  • SSH logins failing before success

    - by Vincent
    I am running Ubuntu 12.04 Server, fully updated, as a web server on Tomcat 7. I have about 1000 clients that very frequently run an rsync job to sync files with this server. Those rsync jobs use SSH with a particular user to open connections to the server. As a result my server is, as usual, full of connections from that same user: about 5 connections per second, all day, at any time. When I then try to open a regular SSH connection with my PuTTY client, the connection fails before login with "Server unexpectedly closed network connection" for about 6 out of 10 attempts; for the other 4 out of 10 it works normally and I am able to log in as any user. Is this an overload of connections? The server statistics look very calm: less than 40% network usage and less than 2% CPU. How can I improve this? Thank you for any help. V.
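    One hedged suspect for exactly this pattern (random "Server unexpectedly closed network connection" before the login prompt, under a constant stream of short-lived SSH connections) is sshd's MaxStartups limit: once more than 10 connections sit in the not-yet-authenticated state, OpenSSH starts refusing new ones at random, which would match the 6-out-of-10 failure rate. Raising the limit in /etc/ssh/sshd_config is a cheap thing to try (values below are illustrative):

        # /etc/ssh/sshd_config
        MaxStartups 100:30:200   # begin random drops at 100 pending logins instead of 10

        # then reload the daemon:
        #   sudo service ssh reload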

    Read the article

  • Use test to check for condition with find and execdir option

    - by slosd
    I think I can keep my question short. Why does the following command produce no output?

        find /usr/share/themes -mindepth 1 -maxdepth 1 -type d -execdir test -d {}/gnome-shell \;

    I expected it to print all folders in /usr/share/themes that contain a folder gnome-shell. Several websites suggest that this usage of test as a command in -exec/-execdir is possible. From man find:

        -exec command ;
            Execute command; true if 0 status is returned. [...]
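    A likely explanation, as far as I can tell: -execdir is an action, so find's implicit -print is suppressed; the test commands run and return true, but nothing ever asks find to print the matches. Keeping test as a filter and adding an explicit -print restores the output (GNU find is assumed, since it substitutes {} even inside a larger argument such as {}/gnome-shell):

        # print only the theme directories that contain a gnome-shell subdirectory
        find /usr/share/themes -mindepth 1 -maxdepth 1 -type d \
             -execdir test -d {}/gnome-shell \; -print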

    Read the article

  • Windows Hosted Network

    - by Nandakumar V
    I have created a hosted network on my Windows 7 system. The netsh wlan show hostednetwork command gives this output:

        Hosted network settings
        -----------------------
        Mode                   : Allowed
        SSID name              : "rambo"
        Max number of clients  : 100
        Authentication         : WPA2-Personal
        Cipher                 : CCMP

        Hosted network status
        ---------------------
        Status                 : Started
        BSSID                  : xx:xx:xx:xx:xx:xx
        Radio type             : 802.11n
        Channel                : 11
        Number of clients      : 1
                xx:xx:xx:xx:xx:xx        Authenticated

    But I have forgotten the password for this connection, and after some googling I found the command netsh wlan refresh hostednetwork YourNewNetworkPassword. On executing it, however, I get this error:

        C:\Users\user>netsh wlan refresh hostednetwork rambo123
        Invalid value "rambo123" for command option "data".

        Usage: refresh hostednetwork [data=]key

    I have no idea what is wrong with this command.
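    For what it's worth, refresh hostednetwork only accepts the literal word key as its argument (it regenerates the key rather than letting you choose one). Two hedged alternatives, both from an elevated command prompt: display the current key, or explicitly set a new one:

        rem show the hosted network's current security key
        netsh wlan show hostednetwork setting=security

        rem or set a new key for the existing SSID
        netsh wlan set hostednetwork ssid=rambo key=YourNewNetworkPassword keyUsage=persistent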

    Read the article

  • peer to peer disk image transmission

    - by JackWu
    Installing Linux/Windows through PXE works smoothly for me, but downloading the images (especially Windows) is a headache: quite apart from the time it takes, the bandwidth usage is horrible. P2P technology comes to mind, but I have no clue how it works or where to start. Does anyone know how to set up a P2P local network and apply it to image transmission? Any advice, tutorials or experiences would be great. Thanks in advance.

    Read the article

  • Excel VLOOKUP using results from a formula as the lookup value [on hold]

    - by Rick Deemer
    For each value in a column on a sheet called RAW DATA, I must remove the first 2 characters "RO" and put the result into a cell on a sheet called ROSS DATA. Some of the values have 3 digits after the "RO", and some have 5 digits. To do that I used:

        =REPLACE('RAW DATA'!A3,1,2,"")

    Then I need to use this new string as the lookup value in a VLOOKUP. The VLOOKUP will look at a named range called DAP on a sheet called DAP, in column 5, for an exact match, and I need it to return that value to the cell. I have tried using INDIRECT in different ways to no avail, and I'm not sure I fully understand its usage. So at this point I am Googling for a method to do this and am at a standstill.
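    As far as I can tell, INDIRECT isn't needed here: VLOOKUP accepts any expression as its lookup value, so the REPLACE call can simply be nested inside it. A sketch, assuming the first column of the DAP range holds the values being matched and column 5 holds the value to return; if the DAP lookup column contains numbers rather than text, wrap the REPLACE in VALUE():

        =VLOOKUP(REPLACE('RAW DATA'!A3,1,2,""),DAP,5,FALSE)

        =VLOOKUP(VALUE(REPLACE('RAW DATA'!A3,1,2,"")),DAP,5,FALSE)

    If instead the values to be matched are themselves in column 5 of DAP, VLOOKUP cannot do that (it only matches on the first column of the range) and an INDEX/MATCH pair would be needed.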

    Read the article

  • Mac OS X: What is using my 'active' memory?

    - by badkitteh
    Hello fellas, I'm using a recent MacBook Pro with 8 GB of RAM, and after a few hours of using it at work I notice the amount of 'active' memory growing and growing. Whenever I reboot the Mac, everything looks fine and it is hardly using any RAM, but after a few hours the active memory is up to about 4.3 GB. Being a developer, I know that 'active memory' is the memory currently used by running processes. So the first thing I did was quit all applications and kill all processes that don't seem to belong to Mac OS X. After I did that, my active memory came down by about 400 MB, but then got stuck at roughly that level. There are no more processes or applications to quit, so I'm wondering: what is actually holding on to the memory? top and Activity Monitor don't report any processes with high memory usage. Any ideas? Thanks!
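    Two hedged ways to see where the pages actually are, since Activity Monitor's per-process list can understate memory held by the kernel and by file caching:

        # kernel-level breakdown of pages: free, active, inactive, wired, etc.
        vm_stat

        # top processes sorted by resident memory size
        top -o rsize -n 15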

    Read the article

  • Do control groups improve system performances?

    - by qdii
    According to this website, enabling cgroups in the kernel can boost performance by sharing resources in a better way. In particular, the conclusion states:

        Nevertheless, with a little trial and error, cgroups can help you improve the efficiency of your systems' resource usage and avoid downtime due to overusage of a single service.

    Kernel Seeds, however, recommends deactivating them altogether:

        Consider these [kernel] settings poison. They remain nothing but system slow-downs. They are all off by default [in the proposed kernel config file].

    Who should I trust?
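    For context on what is actually being traded off, here is a minimal cgroup-v1 sketch of the kind of resource sharing the first article means; the group name, share value and <pid> are placeholders, and it assumes the cpu controller is already mounted under /sys/fs/cgroup/cpu. The controllers mostly cost something only when hierarchies like this are created and used, which may explain why the two sources weigh them so differently:

        # create a CPU control group and give it a modest share of CPU time
        mkdir /sys/fs/cgroup/cpu/backup-jobs
        echo 256 > /sys/fs/cgroup/cpu/backup-jobs/cpu.shares     # default weight is 1024

        # confine an already-running process (<pid> is a placeholder)
        echo <pid> > /sys/fs/cgroup/cpu/backup-jobs/tasks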

    Read the article
