Search Results

Search found 17727 results on 710 pages for 'large apps'.


  • I can't download or stream for more than 3 seconds, and then the connection activity just dies

    - by JMHein
    Just got a new internet connection installed at my sister's place, but it randomly stops working. At first it only affected Flash videos: they would randomly stop buffering. I did a lot of research on this and found that many things can cause this exact problem. I then tried IE and some Flash would stream fine, but still with random deaths. So I told my brother-in-law to reset the router and modem, and that fixed the problem for them, but not for my laptop.

    I then started trying to fix the Flash problem, only to find that downloads of any kind were affected. Now it is so bad that 50% of page loads never finish, because the connection drops to 0% usage within a split second. I can't get Flash reinstalled because the installer has to download, and that download dies at 8%. I tried uploading a large file by FTP to a web server with no trouble, yet any activity on my end that takes longer than about a second to finish just never finishes. I can watch the network graph in Task Manager: it spikes for roughly one second, then drops back to zero, and when I go back to the web page it says it is still loading, and no matter how long I let it sit it never does anything more until I reload. Then it again creates a very short spike of activity on the connection and drops to zero. Also, if I start a download and it does drop off, I can restart the download where it left off and get up to 100 KB/s for around the same one second, then it drops to around 14 KB/s, then zero a second later.

    I am running Windows 7 Home Premium x64 with Firefox 11 and IE8. I have tried everything I can short of calling the ISP, which will very likely get me nowhere fast. Any advice on what steps to take to figure this out would be nice; I am not even sure it isn't just an ISP problem. (At least I should be able to get Flash reinstalled once I get back home.)

    Read the article

  • What can I do to lower bandwidth cost on a bandwidth-heavy site?

    - by acidzombie24
    The easiest answer is a CDN, but I'd like to ask anyway. A friend of mine has a server that is used for mirror downloads. He says he is doing about 10 TB of bandwidth a month, which shocked me (I wonder if he is lying). I've seen his site and he has no ads. I suspect he might close his website once he gets the bill.

    Anyway, since his CPU/RAM are barely used and his disk usage is around 15 GB, I was wondering what he can do to lower cost if he keeps the site going. I said put up ads, but I don't know if ads would cover it. I found one CDN which offers $0.07/GB; 10240 GB (10 TB) * $0.07 = about $717 a month. That seems a little steep, but he does push a lot of traffic because it is a mirror site. Also, a CDN doesn't quite make sense, as he doesn't need multiple servers hosting the files in different areas (which is one reason he isn't using one now); he just needs a big upload pipe. Is there something he can do? At the moment he is paying $200 a month for a dedicated server, and he is using WAY more bandwidth than he should be.

    Side question: can gzipping already-compressed files (zip, rar, etc.) help?

    Read the article

  • How to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app, and each script runs forever. So cat.sh, dog.sh, and foo.sh each run a PHP script in a loop, sleeping after each run.

    I'm trying to figure out how to run the scripts in parallel and, at the same time, see the output of the PHP apps in the stdout/terminal window. I thought simply doing something like

        foo.sh > &2
        dog.sh > &2
        cat.sh > &2

    in a shell script would be sufficient, but it's not working: foo.sh runs foo.php once, and it runs correctly; dog.sh runs dog.php in a never-ending loop, as expected; but cat.sh, which runs cat.php in a never-ending loop, never runs at all! It appears the wrapper script never gets to cat.sh. If I run cat.sh by itself in a separate window/terminal, it runs as expected. Thoughts/comments?
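    A minimal editorial sketch of one common fix, assuming the three scripts sit in the current directory and are executable: running the commands sequentially makes the wrapper block on the first never-ending script, so later ones are never reached. Backgrounding each with & lets all three run concurrently, and their output still arrives on the wrapper's terminal:

        #!/bin/bash
        # Launch each looping script as a background job so they run
        # in parallel; stdout/stderr of all three stay on this terminal.
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &
        wait   # keep the wrapper alive until every background job exits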

    Read the article

  • Configuring suExec to work with Apache and PHP via FastCGI

    - by RandomPsychology
    I have installed ISPConfig 3 on an Ubuntu VPS and configured it for Apache + PHP via FastCGI and suExec. I am able to upload PHP apps (e.g. WordPress) and run them normally with suExec. However, for some reason the PHP scripts cannot write data to disk. For instance, trying to upgrade a plugin via WordPress' web interface fails with the error "Could not create directory /path/to/wp-content/upgrade/plugin.tmp." Trying to upload media and other assets via the web also fails.

    I've checked owner/group on the directory structure and it looks good. The suExec log also seems normal, and I don't see any indicative errors in the web server logs. I can also confirm that changing the owner/group on the directories does produce the expected error in suexec.log. Additionally, I have the directory permissions set to u=rw,g=r,o= and I've also tried g=rw. None of this results in my scripts being able to write to the directories. What am I doing wrong?
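    One detail worth flagging as an editorial aside, not part of the original question: on Linux, a directory must carry the execute (search) bit before anything can be created inside it, so u=rw on a directory blocks writes even for its owner. A hedged sketch of the usual layout, reusing the /path/to/wp-content path from the error above:

        # Directories need rwx for their owner to be traversed and written
        # into; plain files can stay at rw. Both commands assume the suExec
        # user already owns the tree.
        find /path/to/wp-content -type d -exec chmod u=rwx,g=rx,o= {} +
        find /path/to/wp-content -type f -exec chmod u=rw,g=r,o= {} +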

    Read the article

  • Useful Gentoo Linux utilities

    - by Alakdae
    I want to make a list of utilities that come in handy in Gentoo (general Linux tools available in all distributions are also appreciated). What tools and commands do you use and consider helpful in administering a Gentoo server? I will update the list with commands from answers from time to time.

    eclean: cleans distfiles and binary packages. Example: eclean distfiles cleans out the files in /usr/portage/distfiles. Pretty handy. Package: app-portage/gentoolkit

    eix: very useful tool for getting information about a package. Similar to emerge -s, but much faster and more precise. Example: eix gentoolkit shows information about the package, such as available versions, masked versions, installed versions, and description. Package: app-portage/eix

    eix-test-obsolete: checks the system for obsolete, redundant, or uninstalled entries in package.keywords, package.mask, package.unmask, package.use, and package.cflags. Example: eix-test-obsolete shows non-matching entries, redundant entries, and uninstalled entries. Package: app-portage/eix

    equery: another very useful tool for getting information about packages (listing package files, checking which files belong to which package, and much more). Example: equery b emerge shows which package installed a file called emerge. Package: app-portage/gentoolkit

    genlop: extracts information about emerged ebuilds. Example: genlop -l --date yesterday shows a list of packages that were emerged yesterday. Package: app-portage/genlop

    glsa-check: checks whether the system is affected by GLSAs (security advisories). Example: glsa-check -l affected lists the GLSAs the system is affected by. Package: app-portage/gentoolkit

    rc-update: manages (adds, deletes) runlevel scripts. Example: rc-update add syslog-ng default adds syslog-ng to the default runlevel. Package: sys-apps/baselayout

    revdep-rebuild: scans libraries and binaries for missing shared-library dependencies. Example: revdep-rebuild gathers binary and library information, checks for missing dependencies, and rebuilds the packages that have them. Package: app-portage/gentoolkit

    Read the article

  • Restoring open software after a restart event in Windows

    - by Doltknuckle
    I find that at the end of a long day I sometimes have a large number of programs running, all of which I will need tomorrow. Normally this isn't an issue; I can simply lock the machine and come back the next day. My problem arises when Windows Update launches in the middle of the night and force-restarts my computer, which in turn closes all my open software. I save everything regularly, so I don't lose anything, but I waste time reopening all of those resources after every restart.

    [EDIT] I should clarify that I still want to be able to restart my computer when an update comes down; preventing the restart only delays the problem. I should have been more specific: I want to be able to recover my working environment after a restart for any reason, including scheduled maintenance, power loss, updates, and software installs.

    [EDIT] I can't simply set the programs to launch at startup, because the files I have open change from week to week. So I need something that monitors what I have open and gives me the option to "recover" those software sessions when I log back in.

    Anyone have any suggestions on what I can do? I'd even be willing to purchase software to do this for me if that is the only option. Thanks

    Read the article

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm tied to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out whether there's a way to attach local instance storage to a Windows EC2 instance using the typical command-line interface (the Amazon website GUI doesn't support it), and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI from the snapshot, but of course the docs say this is unsupported and produces an unbootable AMI.

    In short, here's what I'm trying to do:

    1. Run a Windows instance (EBS/S3 instance type doesn't matter).
    2. Attach local instance storage as drive D:.
    3. Persist that configuration as an AMI, such that I can start as many instances as necessary from the GUI, command line, or REST API.
    4. Take a launched instance, update its software, shut it down, and create another AMI from it. Wash, rinse, repeat.

    One other option which isn't horrible, but isn't ideal, is to create an AMI which has two EBS volumes already attached (system+apps and data). Essentially, every time I start an instance from that AMI, it would create two new EBS volumes of predetermined size. I'm trying to avoid that scenario if possible.
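    As an editorial aside, not from the original question: with the classic ec2-api-tools, ephemeral drives are requested through block-device mappings at launch time, which may cover step 2. A hedged sketch; the AMI ID is a placeholder, and the device name for the first ephemeral drive is an assumption that varies by platform and instance type:

        # Ask for the first instance-store volume to be attached at launch
        # (on Windows instances it typically surfaces as drive D:):
        ec2-run-instances ami-xxxxxxxx --instance-type m1.large \
            --block-device-mapping "xvdb=ephemeral0"

    Whether such a mapping can be baked into a registered Windows AMI is exactly the open question above, so treat this as a starting point rather than a confirmed answer.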

    Read the article

  • Toshiba laptop CD drive read causes OS to totally freeze

    - by Fujishiro
    Okay, I'll try to write an understandable summary; forgive me if I fail in the attempt.

    There is a Toshiba Satellite notebook with Windows 7 x86 Professional (OEM) installed on it, and everything is fine (well, somewhat). The problem: if you put an audio disc, or any kind of disc, into the drive, something starts to eat the PC. Back when the owner first told me about this, he had put an audio disc into the laptop, and Winamp was causing 100% I/O load. I tried taskkill, taskkill /T, PowerShell, EVERYTHING. You simply cannot kill Winamp, or whatever process becomes the blocker at that time. Even if you kill almost everything else, the laptop won't do a clean shutdown. I also tried the force switch on shutdown from cmd, to no avail. (So at these times you can use the laptop, but the blocking app, Explorer, and the disc turn gray as non-responding; you can try to kill them, but it won't work, nor can you shut down the machine.) I also tried killing by PID; to find the highest I/O I used "Select columns" in Task Manager and enabled the I/O columns.

    My first hunch was a problematic disc that autoplay keeps trying to read (though even that shouldn't kill the PC). I disabled autoplay and removed Winamp, tried other software, etc., and everything seemed fine. A few days later the owner put a disc into the machine and it reproduced the same symptoms, with a totally different disc.

    A virus is not likely; the machine is protected by BitDefender (valid license) and Spybot. Thanks if you have ANY idea about this strange problem.

    ps.: For now, the owner uses Daemon Tools + Blindwrite as an alternative for those apps that wouldn't start without the disc.

    Read the article

  • Wiping Deleted Directory Entries and Defragmenting Directories

    - by Synetech inc.
    Hi, I have seen plenty of apps that wipe free space on a disk (usually by creating a file that is as big as the remaining space) or defragment a file (usually by using the MoveFile API to copy it to a new contiguous area). What I have not seen, however, is a program that wipes the deleted directory entries. That is, when a file is deleted, its information (name, dates, etc.) remains in the directory, merely marked as empty. That leaves all kinds of information in a directory entry, and also wastes space, since (at least on FAT drives) the directory may be using several clusters. For example, if a directory once held a lot of files, it will have been expanded into another cluster, which could be anywhere on the disk. This means the directory is fragmented and may be using more clusters than needed, possibly with hundreds of unused (i.e., "deleted file") entries between active files.

    Does anyone know of a program that can defragment/consolidate directories (i.e., wipe unused entries and move active entries together)? (I would really rather not have to resort to writing my own yet again.) Thanks a lot.

    EDIT: Sorry, I should have said: Windows and/or DOS, for FAT*/NTFS.
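    For context, an editorial sketch of the on-disk structure being described, following the published FAT layout: deleting a file overwrites only the first name byte with 0xE5, so everything else in the 32-byte entry stays behind until something reuses the slot.

        #include <stdint.h>

        /* Classic 32-byte FAT directory entry (8.3 form). "Deleted" means
         * name[0] is set to 0xE5; the remaining name bytes, timestamps,
         * first-cluster fields, and size are left intact on disk. */
        struct fat_dirent {
            uint8_t  name[11];        /* 8.3 name; 0xE5 here marks a free slot */
            uint8_t  attr;            /* attribute flags */
            uint8_t  ntres;           /* reserved */
            uint8_t  crt_time_tenth;  /* creation time, tenths of a second */
            uint16_t crt_time, crt_date;
            uint16_t lst_acc_date;    /* last access date */
            uint16_t fst_clus_hi;     /* first cluster, high word (FAT32) */
            uint16_t wrt_time, wrt_date;
            uint16_t fst_clus_lo;     /* first cluster, low word */
            uint32_t file_size;       /* file size in bytes */
        };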

    Read the article

  • MySQL mass data insert

    - by user12145
    Edit: I realized that if I construct one large multi-row query in memory, the speed increases almost tenfold: "insert ignore into xxx (col1, col2) values ('a',1), ('b',1), ('c',1) ..."

    Edit: since I have an index on the first column, the insert time creeps up as I insert more rows. Can I delay building the index until the end?

    Original: I'm using the following to batch-insert 10 million rows into a MySQL db (not all at once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use LOAD DATA INFILE to improve performance? I would have to create a second file to store all 10 million rows, then load that into the db. Are there better ways?

        PreparedStatement st = con.prepareStatement(
            "insert ignore into xxx (col1, col2) values (?, 1)");
        Iterator<String> d = data.iterator();
        while (d.hasNext()) {
            st.clearParameters();
            st.setString(1, d.next().toLowerCase());
            st.addBatch();
        }
        int[] updateCounts = st.executeBatch();
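    On the two edits, an editorial sketch of what those options look like in MySQL itself; the table name comes from the question and the file path is a placeholder:

        -- Bulk load from a pre-built file (generally the fastest route):
        LOAD DATA INFILE '/tmp/rows.txt'
        IGNORE INTO TABLE xxx (col1, col2);

        -- Defer non-unique index maintenance until the end (MyISAM):
        ALTER TABLE xxx DISABLE KEYS;
        -- ... run the batch inserts or LOAD DATA here ...
        ALTER TABLE xxx ENABLE KEYS;

    Note that DISABLE KEYS only postpones non-unique indexes, so if the index on col1 is UNIQUE this won't help, and on InnoDB the usual trick is instead to drop the secondary index and re-add it after the load.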

    Read the article

  • Windows 8 auto-hibernate from sleep not working on Retina MacBook Pro

    - by frenchglen
    I have a similar question to this one, only my context is the 15" Retina MacBook Pro and Windows 8. I have just the original OS X Mountain Lion on there, then Windows 8 via Boot Camp; no rEFIt installed. (I just press ALT every time I boot into Windows, actually as a security measure against tech-unsavvy thugs: if the laptop is stolen, they think it's only a Mac and don't discover my Windows install as quickly as they otherwise would, and by that time I can remotely activate various anti-theft Mac apps and nab them that way.)

    So, as the related question asks: why isn't it behaving like it should? The Windows 7 FAQ states: "Will sleep eventually drain my laptop battery? If your laptop battery charge gets critically low while the computer is asleep, Windows automatically puts the laptop into hibernation mode." But this is just not happening on my rMBP under Windows 8. It seems that EVERY time I let the laptop sleep (when it reaches 10%) and then arrive home, plug it in, and hope to simply resume my work, it has NOT saved the session to disk, and I lose ALL my work.

    Whose fault is it? Windows 8's (a bug, grr)? Or Apple's EFI (perhaps fixable by editing EFI options, or do I have to install rEFIt to make it work)? Or can changing Windows power options somehow fix the problem? Thanks for your help.

    Read the article

  • Getting a TTY in a Connectback Shell

    - by Asad R.
    I'm often asked by friends to help with small Linux problems, and more often than not I'm required to log in to the remote system. Usually there are a lot of issues with making an account and logging in (sometimes the box is behind a NAT device, sometimes SSHD isn't installed, etc.), so I usually just ask them to make a connect-back shell using netcat (nc -e /bin/bash <host> <port>). If they don't have netcat, I can ask them to grab a statically compiled binary, which isn't hard or time-consuming to download and run.

    Though this works well enough for entering simple commands, I can't run any apps that require a tty (vi, for example) and can't use any job-control functions. I managed to bypass this by running in.telnetd with a few arguments within the connect-back shell, which assigned me a terminal and dropped me to a shell. Unfortunately, in.telnetd isn't usually installed by default on most systems.

    What's the easiest way to get a fully functional connect-back terminal shell without requiring any non-standard packages? (A small C program that does the job would be fine as well; I just can't seem to find much documentation on how a TTY is assigned/allocated. A solution that doesn't require me to plough through the source code for SSHD and TELNETD would be nice.)
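    An editorial note, not from the original post: a widely used trick is to let Python's pty module allocate the pseudo-terminal and respawn a shell on it, since python ships by default on most distributions. A sketch, assuming python (or the script utility) exists on the remote box:

        # Inside the dumb connect-back shell, allocate a pty and put bash on it:
        python -c 'import pty; pty.spawn("/bin/bash")'

        # Alternative: script(1) allocates a pty as a side effect of logging:
        script -qc /bin/bash /dev/null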

    Read the article

  • Throttle CPU Usage consumed by Process

    - by Brett Powell
    We run a game-server company where we basically have large numbers of customers sharing a single machine, each on their own instance of a Java process (Minecraft) managed by our web control panels. Since the last few game updates, we have noticed that many of the third-party plugins our customers use are poorly written, and we frequently see huge CPU spikes from certain servers until we manually kill the process. Our game panel automatically restarts processes, so killing them is not really an issue. Our problem is that once one of these servers starts consuming 50%+ CPU, it takes at least 5 minutes to RDP into the machine, locate whom it belongs to, shut it down, and notify them.

    Are there any current solutions for Server 2008 which allow throttling a process's CPU usage or, worst case, just automatically killing a process stuck at that level? As Minecraft is essentially a single-threaded application, we have investigated using processor affinity, although with the variations in our packages and fluctuations in usage, this doesn't work well for us. Some option to throttle the maximum usage a process can consume would be perfect, or at least the option to kill a process using that much. Thanks!
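    Two editorial pointers, neither from the original thread: Windows System Resource Manager (a feature of Server 2008 Enterprise/Datacenter) can cap per-process CPU by policy, and for the kill-it-automatically fallback a scheduled PowerShell watchdog is a common stopgap. A hedged sketch of the latter; the process name, sampling interval, and 50% threshold are assumptions to adapt:

        # Sample each java process twice, 10 seconds apart, and kill any
        # that burned more than ~50% of one core in between. The game
        # panel is assumed to restart whatever gets killed.
        $before = Get-Process java -ErrorAction SilentlyContinue |
                  Select-Object Id, CPU
        Start-Sleep -Seconds 10
        foreach ($p in Get-Process java -ErrorAction SilentlyContinue) {
            $old = $before | Where-Object { $_.Id -eq $p.Id }
            if ($old -and ((($p.CPU - $old.CPU) / 10) -gt 0.5)) {
                Stop-Process -Id $p.Id -Force
            }
        }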

    Read the article

  • Clustering/load balancing for cluster unaware applications

    - by AaronLS
    Forgive me if I use any of these terms incorrectly. I am wondering if there is any kind of software that would allow me to "join" two computers together such that a cluster-unaware application could utilize their combined computing resources. By "cluster-unaware" I mean an application that isn't designed to share work across multiple services. My understanding is that clustering is enabled by the specific application's architecture, such that messaging between multiple instances of the application coordinates the sharing of work. Instead, I am looking for something that enables clustering at the OS or virtualization level, so that any application could essentially be clustered.

    Failing that, I am also wondering about the following scenario: we have three different applications, call them A, B, and C, and two single-core computers. At any given time, any combination of those applications might be CPU-intensive. In cases where only two of those apps are very active, I'd like one of them moved over to the other server; in a nutshell, some sort of dynamic, automatic shuffling of the applications' load. I have heard of virtual machines that can be migrated across physical machines while live, but I am wondering if this can be done automatically in response to an application's or VM's CPU activity?

    Read the article

  • UPS with an HP ProLiant server

    - by Groo
    We placed an Eaton Ellipse Max 1500 (900W) as the UPS for our HP ProLiant ML350 G6. Upon the first power failure (actually, we only moved the UPS's input plug to a different socket), the server immediately turned off, and the Health LED turned red and started blinking. The UPS had been in operation for about a week before that, with the battery fully charged to 100%. Since our server's hot-plug power supply is 460W, we are pretty sure we haven't overloaded it; the server was completely idle at the time (no web or Windows apps running except core Windows Server services).

    We then tried the same with a different, no-name older PC (Core 2 Duo, 2 GB RAM) with a generic power supply (not sure of its rating), and it continued working when we pulled the plug. UPS load was less than 15% (measured with the provided Eaton utility). We measured the UPS's output voltage with a smart oscilloscope, and the THD of the output waveform turned out to be 40%.

    Have you had similar experiences? Could this be a faulty UPS? Or a faulty power supply? Or some HP sensor configured to trigger too strictly? I wouldn't like to replace this UPS with the same brand only to get the same results.

    [Edit] I also tried this while the server was turned off. While the UPS is running on battery, the server will not start: as soon as I press the power button, the Health LED starts blinking red.

    Read the article

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion 8 and IIS with a widely geographically distributed user base, and we have been receiving complaints about issues with some HTTP uploads. Most of the complaints come from locations geographically distant from our main datacenter on the US east coast.

    I've attempted to upload the same 70 MB file from a US west coast test server to both our main site and a backup running the same code on a different network route, and I saw the same issues fairly consistently in both places, so I've ruled out the code, the route, and internal network errors. I've also tested uploads using both the native cf upload tag and a third-party tool called SaFileUp. I saw the same issues with both tools, so I don't think this is necessarily a ColdFusion problem either. I don't have any problems uploading the test file from the east coast to other east coast servers, so I'm beginning to think the distance between our users and our equipment is a factor. I've also found that smaller files (< 10 MB) are more likely to succeed than large ones.

    I tried the test upload with both IE and Firefox and noticed a difference in the way the browsers handle packet errors: IE seemed to have a tough time continuing an upload after dropped/bad packets, whereas Firefox seemed able to gracefully resume an upload after packet problems.

    Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving of packet loss, or resumable after an error? A different upload tool, etc.? Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think switching uploads to SSL will help (no layer-7 packet sniffing might lead to a smoother upload)? Thanks.

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by mlauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.html files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's set up.

    Sample directory tree:

        c:/inetpub/ (nothing in here)
        d:/web_files/my_web_apps
        d:/web_files/my_web_apps/app1/
        d:/web_files/my_web_apps/app2/
        d:/web_files/my_web_apps/html_files/

    app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in each web app in IIS. Sample web directory tree:

        //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)
        //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)

    If I put a file called test.html in the root of //app1/, add the default mapping to the ASP.NET DLL, and set up my security on the root folder with deny="?", then accessing test.html works exactly as expected: if I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated, it displays test.html.

    If I put the test.html file in the html_files directory, I get totally different behavior. Now the login.aspx page loads, and I stuck some code in to check whether I was still authenticated:

        <p>authenticated: <%=User.Identity.IsAuthenticated%></p>

    I figured it would say false, because why else would it bother to load the login page? Nope, it says true. So it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on Google to see if I've missed something. Fingers crossed.

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500GB (System, Boot)
        D1: 1TB
        D2: 500GB
        D3: 250GB

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100MB: System-Reserved Boot
        D0 P2: 50GB: Virtual machine VMDKs for OS
        D0 P3: 350GB: Data
        D1 P1: 100MB: System-Reserved Boot
        D1 P2: 50GB: Virtual machine VMDKs for OS
        D1 P3: 800GB: Data
        D2 P1: 450GB: Data
        D3 P1: 200GB: Data

    This will result in a mirrored boot partition, a mirrored operating system, mirrored virtual-machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM 100MB: the small boot partition created by the Windows 7 installer (RAID-1)
        HOST 50GB: the Windows 7 partition (RAID-1)
        GUESTS 50GB: virtual machine guest VMDKs (RAID-1)
        VG1 900GB: volume group consisting of a RAID-5 and two RAID-1s
        VG2 300GB: volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and is easily recoverable, as it is not safe. Are there any particular gotchas when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically collocated. In this case I am using MongoDB.

    The stack is Linux/Passenger/Ruby/Rails/MongoDB. The database is write-heavy and read-light. The infrastructure is Amazon EC2. The application layer must be physically located in three or more different locations; I can't justify this requirement further than that it is a requirement. The database, however, needn't be located in more than one place if it can be written to quickly from the other locations.

    From reading Mongo's documentation, replication seems like more of a candidate than sharding, because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.

    Read the article

  • Confused about the Windows 7 Preinstallation Kit

    - by David Brown
    I build custom PCs and would like to use the Windows 7 Preinstallation Kit to make installation go a little quicker and to customize the Windows image. However, since each PC is built to a particular customer's specifications, the hardware will rarely be the same, so I would like to have a single answer file that works for everything. I'm not sure that's possible, however. What I mostly want to do for now is add my support information and pre-set anything that I would normally change after each installation completes.

    I have a Windows 7 Professional Upgrade DVD set (both 32-bit and 64-bit), but no OEM disks. I copied the Install.wim file to my local drive and opened it in Windows System Image Manager, but it asks me to choose a catalog file for a specific edition of Windows 7. Will this limit the answer file to whichever edition I choose? I would think choosing Starter would give me the most basic settings, which would apply to all other editions, but I'm not entirely sure of this.

    I don't intend to install any extra applications or drivers; I merely want to insert an OEM disk and my OPK USB drive and have it work for whatever edition of Windows 7 I'm installing. If a large number of similarly configured PCs need to be built, I'll go ahead and create a custom answer file for that case, but for a single-machine order that seems like overkill. In addition, do I need separate answer files for the 32-bit and 64-bit versions of Windows 7, or will one work for both, even though I copied the Install.wim from the 32-bit disk? Thanks!
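    For the support-information piece specifically, an editorial sketch of the relevant answer-file fragment: the OEMInformation block lives in the Microsoft-Windows-Shell-Setup component of the oobeSystem pass and is hardware-neutral, so it travels well across different builds. All values shown are placeholders:

        <!-- Microsoft-Windows-Shell-Setup component, oobeSystem pass -->
        <OEMInformation>
            <Manufacturer>Example PC Shop</Manufacturer>
            <SupportHours>Mon-Fri 9am-6pm</SupportHours>
            <SupportPhone>555-0100</SupportPhone>
            <SupportURL>http://www.example.com/support</SupportURL>
        </OEMInformation>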

    Read the article

  • PostgreSQL: No space left on device

    - by pstanton
    Postgres is reporting that it is out of disk space while performing a rather large aggregation query:

        Caused by: org.postgresql.util.PSQLException: ERROR: could not write block 31840050 of temporary file: No space left on device
            at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)
            at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:189)
            ... 8 more

    However, the disk has quite a lot of space:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             386G  123G  243G  34% /
        udev                  5.9G  172K  5.9G   1% /dev
        none                  5.9G     0  5.9G   0% /dev/shm
        none                  5.9G  628K  5.9G   1% /var/run
        none                  5.9G     0  5.9G   0% /var/lock
        none                  5.9G     0  5.9G   0% /lib/init/rw

    The query is doing the following:

        INSERT INTO summary_table
        SELECT t.a, t.b, SUM(t.c) AS c, COUNT(t.*) AS count, t.d, t.e,
               DATE_TRUNC('month', t.start) AS month, tt.type AS type,
               FALSE, tt.duration
        FROM detail_table_1 t, detail_table_2 tt
        WHERE t.trid=tt.id
          AND tt.type='a'
          AND DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')>=23
          OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')<13
        GROUP BY month, type, t.a, t.b, t.d, t.e, FALSE, tt.duration

    Any tips?
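    An editorial observation, not part of the original question: AND binds tighter than OR in SQL, so the WHERE clause above effectively means (t.trid=tt.id AND tt.type='a' AND hour >= 23) OR hour < 13. The second branch is not constrained by the join condition at all, so every early-morning row in detail_table_1 pairs with every row in detail_table_2, which alone could account for a temporary file outgrowing a 243 GB disk. If the intent was "join, then keep rows outside 13:00-23:00", the parenthesized form would be:

        WHERE t.trid = tt.id
          AND tt.type = 'a'
          AND (DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York') >= 23
               OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York') < 13)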

    Read the article

  • Linux: Managing users, groups and applications

    - by RN
    I am fairly new to Linux administration, so this may sound like quite a noob question. I have a VPS account with root access. I need to install Tomcat and Java on it, and later other open-source applications as well. Installation for all of these is as simple as unpacking a .gz into a folder. My questions are:

    A) Where should I keep all these programs? On Windows, I typically have a folder called programs under c:\ where I unzip all applications, and I plan to have something similar here. Currently I have them all in an apps folder under /root, which I'm guessing is a bad idea.

    B) To what group should Tom belong? I need a user, say Tom, who can simply execute these programs. Do I need to create a new group, or just add Tom to some existing group?

    C) Finally, am I doing something really stupid by installing all these applications by simply unpacking them? An alternative would be to use yum or RPM or something like that to install them. Given my familiarity (and tight budget), that seems too much for me; I feel uncomfortable running commands I don't understand too well.
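    An editorial sketch of the conventional layout, not from the original post: the Filesystem Hierarchy Standard reserves /opt for self-contained add-on software like an unpacked Tomcat, and a dedicated group keeps the app runnable without root. The names below (group tomcat, user tom, the Tomcat version) are illustrative:

        # Unpack under /opt, the conventional home for self-contained apps:
        tar -xzf apache-tomcat-7.0.x.tar.gz -C /opt

        # A dedicated group, and a user allowed to run the app:
        groupadd tomcat
        useradd -m -g tomcat tom

        # Hand the tree to that user/group so nothing needs to run as root:
        chown -R tom:tomcat /opt/apache-tomcat-7.0.x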

    Read the article

  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables and would like to learn it in more detail; I'm hoping my learning experience can also be useful to others.

    The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing HTTP connection, e.g. some large download. Under such conditions it can take 10 seconds or more to load a basic webpage, and sometimes connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for upload and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to crash the router, and is rumoured not to actually work on any Linux system.

    I'd like to troubleshoot this and hopefully improve the settings dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters, I'm not sure what HTB actually specifies: is it a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol, the way it is already supposed to, and in addition I'd like it to drop the priority of connections which have a high total byte count, say over 400KB.

    Tips on utilities that can be run under dd-wrt to get more insight into what's going on are also appreciated. I've tried to get iftop to work, but there were issues running curses. I'm leaning towards replacing dd-wrt with OpenWrt; comments on this strategy are also welcome. I suspect I'd be well advised to get a second router as a stand-in before trying that. It may be worth noting that my total bandwidth is pretty limited (256 Kbit/s).
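    An editorial pointer on the 400KB idea, not from the original question: iptables has a connbytes match that can tag packets belonging to connections that have already transferred more than a threshold, and a tc filter can then steer that mark into a low-priority HTB class. A hedged sketch (the mark value is a placeholder, and dd-wrt's stock HTB classes would still need a filter wired to it):

        # Mark packets of any connection that has moved more than 400 KB
        # in total; a tc filter matching fwmark 40 can then shunt these
        # flows into a bulk HTB class.
        iptables -t mangle -A PREROUTING -m connbytes \
            --connbytes 409600: --connbytes-dir both --connbytes-mode bytes \
            -j MARK --set-mark 40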

    Read the article

  • Is this SPF record correct for me?

    - by DT
    I'm completely new to Stack Overflow, so hi! I need to add an SPF record to my site "main.com" (not the real address) to allow an email publishing company, "emailpublishers.com" (not the real address), to send emails on my behalf. However, I'm nervous about adding an SPF record because of the havoc it could wreak if done incorrectly.

    I use Google Apps. I also use "auxiliary.com" to send mail from "main.com". And, of course, I use "main.com" to send mail as well. "auxiliary.com" doesn't have an SPF record of its own. I used Microsoft's and OpenSPF's wizards to generate the following SPF entry. Does it seem correct for me?

        "v=spf1 a mx ip4:55.55.555.55 mx:alt1.aspmx.l.google.com mx:alt2.aspmx.l.google.com
        mx:aspmx.l.google.com mx:aspmx2.googlemail.com mx:aspmx3.googlemail.com
        mx:aspmx4.googlemail.com mx:aspmx5.googlemail.com a:auxiliary.com
        include:_spf.google.com include:auxiliary.com mx:auxiliary.com
        include:emailpublishers.com mx:emailpublishers.com ~all"

    However, my host MediaTemple says in a knowledge-base article to use:

        v=spf1 a:main.com/20 ~all

    So that added to my confusion. Thanks a lot!
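    One editorial comparison, not from the question: include:_spf.google.com already authorizes all of Google Apps' outbound servers, so listing each Google MX host individually is redundant, and an include: of auxiliary.com would actually fail outright, since that domain publishes no SPF record of its own; a:auxiliary.com authorizes its host directly instead. Under those assumptions (and assuming emailpublishers.com does publish an SPF record to include), a trimmed equivalent, kept on a single line in DNS, could look like:

        v=spf1 a mx ip4:55.55.555.55 a:auxiliary.com include:_spf.google.com include:emailpublishers.com ~all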

    Read the article

  • Launch synergy client on boot in Mac OS X

    - by Herms
    I have a Mac as a secondary machine at work. Currently I use Synergy on my main machine to share its keyboard and mouse with the Mac. I created a launch agent for my user that starts the Synergy client when I log in, and that works. However, this means I still have to pull out the Mac's keyboard and mouse in order to log in. I tried making a launch daemon so that it would start on boot, but I get the following errors in the console:

        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Warning>: 3891612: (CGSLookupServerRootPort) Untrusted apps are not allowed to connect to or launch Window Server before login.
        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Error>: kCGErrorRangeCheck : On-demand launch of the Window Server is allowed for root user only.
        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Error>: kCGErrorRangeCheck : Set a breakpoint at CGErrorBreakpoint() to catch errors as they are returned
        LaunchSynergy[52] _RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.

    Is there a way to get this to work? It looks like the Mac's security doesn't want to allow anything to take control of the screen at the login window. I can understand that, but I'd like a way to override it, as it would make my life a lot easier.
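    An editorial sketch of the workaround commonly used for this, not from the original post: instead of a daemon, a system-wide launch agent scoped to the login-window session runs with loginwindow's rights and is therefore allowed to talk to the WindowServer. The label, synergyc path, and server name below are placeholders; the file would live at /Library/LaunchAgents/com.example.synergyc.plist and be loaded once with "sudo launchctl load -S LoginWindow" on that path:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key><string>com.example.synergyc</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/bin/synergyc</string>
                <string>--no-daemon</string>
                <string>server.example.local</string>
            </array>
            <key>LimitLoadToSessionType</key><string>LoginWindow</string>
            <key>KeepAlive</key><true/>
        </dict>
        </plist>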

    Read the article
