Search Results



  • Extract a section of a tgz file

    - by TRiG
    I have a 28.5 GB .tgz file which was created on the command line of a Linux computer, compressing one folder and all of its many subfolders. I now want to extract a single sub-sub-folder from that .tgz file, using 7-Zip on Windows Vista. I can't see a way to do it. Opening the .tgz file in 7-Zip just shows the .tar file inside it. There doesn't seem to be any way to browse that .tar file and extract the section I want. I assume there is a way to do this, but I can't see it. Simply double-clicking on the .tar file brings up a progress bar which runs slowly until my computer complains it's running out of space; I imagine it's trying to extract the whole thing. Searching for "extract section of tgz" and "extract tgz subfolder" and similar found me a way to do it on the Linux command line, but no obvious way to do it on Windows. (Most results found were about extracting into a subfolder, not extracting a subfolder out of the archive.)
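    For reference, both routes can be sketched in a couple of commands - on Linux, tar extracts a single stored path directly, and 7-Zip's command-line build (7z.exe) can stream the inner .tar so the full 28.5 GB is never unpacked to disk. Archive and folder names below are placeholders:

        # Linux: extract only one subfolder (use the path as stored in the archive)
        tar -xzf backup.tgz "top-folder/sub/sub-sub-folder"

        # Windows (7z.exe): decompress to stdout and pull just the wanted
        # folder out of the tar stream, instead of unpacking everything
        7z x backup.tgz -so | 7z x -si -ttar "top-folder/sub/sub-sub-folder" -oC:\restore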


  • A number of routers in a small community lock up and require a reboot

    - by Anthony Hiscox
    I live in a small town which has one primary ISP. Lately I have noticed that a number of wireless routers have been locking up and requiring a reboot before allowing any connections. This has affected two of my routers, my work router, and a few others. In all cases, wired connections continued to function as usual. Often wireless clients can see the SSID but simply won't connect. I can only think of a few possibilities and was hoping someone here might be able to point me in the right direction:
    1. Our ISP is well known to be flaky; something they are doing is causing this. What that might be, I have no clue, as it seems to affect the wireless only.
    2. There's a power issue in town. Given our remote location and reputation for crap electrical, this seems reasonable. Only one router was plugged into a UPS, and I'm not sure of its quality.
    3. There is some bug in all the different firmware for every one of these routers (all different). That doesn't seem reasonable, unless:
    4. It's an unknown (or known) exploit or DoS of some sort being launched by a massive team of ninjas hell-bent on forcing us all to be tethered to our walls by ethernet cables, or:
    5. It's just been a coincidence and I'm just paranoid (this has some weight, I mean, read 4 again).
    Anyone else experience similar issues and have some tips?


  • Contacts in Outlook 2003/2007, some questions

    - by Ernst
    If I create a distribution list and then select members, can I see different fields than the default ones? In 2007 there are radio buttons for 'name only' and 'more columns', but the latter seems to return no results at all, regardless of which address book I choose. In 2003 there is no such thing.
    Is there a plug-in that will break up the recipients (whether they be To, CC, or BCC) into groups of X, and send the required number of mails? Our host allows only 50 recipients per mail and only 300 total recipients per 5 minutes. I know the email client blat has exactly this functionality, but it does not seem to be able to connect to the Exchange server to get the contacts needed. Could I maybe set Outlook to send to blat, which then does the breaking up as necessary?
    Can I (or is there a plug-in for this) export only part of the contacts instead of all of them?
    Note that we send mail outside our organisation via our web host, where we've got a few mailboxes, and we use our Exchange (2000) server only internally; the few people that can send email to the outside world have an external mailbox as well as their Exchange account defined. I might be able to convince our general boss that we can simply give (some) people the ability to send outside via Exchange, but I might just as well not succeed. Alternatively, is there another program that can connect to Exchange to get the contacts (selected based on categories) and then send via SMTP in groups, with delays between the mails?
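    If the contacts can be exported to a flat recipients.txt (one address per line), the batching itself is easy to script around blat; a rough sketch using blat's standard body-file/-to/-server/-f options, shown POSIX-style for brevity (the same loop works as a batch or PowerShell script):

        # send one mail per 50 recipients, pausing to stay under
        # the 300-recipients-per-5-minutes limit
        split -l 50 recipients.txt chunk_
        for f in chunk_*; do
            to=$(paste -sd, "$f")              # comma-separate one chunk
            blat body.txt -to "$to" -subject "Newsletter" \
                 -server smtp.example.com -f sender@example.com
            sleep 60
        done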


  • How do I keep a bridge enabled on a bonded interface?

    - by jlawer
    I'm working on setting up a pair of CentOS 6.3 servers that will run a couple of KVM VMs and have come across a problem setting up a bridge on a bond. I am using mode 4 (802.3ad) bonding on a pair of stacked Dell PowerConnect 5524 switches connecting to R320 servers. There are 2 links (1 to each switch) that form a link aggregation group (802.3ad / LACP bonding). On top of the bond I have VLAN tagging. I've verified this is a problem on multiple other bonding modes, so it isn't just a mode 4 issue. I am testing what happens when 1 link is dropped (i.e. switch dies, cable breaks, etc.). If I don't have a bridge (for KVM), everything works fine; failover happens as expected. If I have the bridge enabled, it works fine until failover (unplugging a cable). When failover happens, /var/log/messages shows the slave link going down, followed within a second by:
    kernel: br1: port 1(bond0.8) entering disabled state
    The thing is, /proc/net/bonding/bond0 shows the link is up as expected (simply with only 1 slave instead of 2). If I plug the cable back in, it recovers and brings the bridge back to an enabled state. I actually tested this while a ping was occurring, and if the timing is right a packet will actually leave the system after the link is lost but before the disabled message occurs. This disabled state I assumed was STP, but I have disabled STP in the bridge configuration and the issue still occurs. brctl showstp br1 still shows the link as disabled when it is running without a slave. I also switched between the NICs in the server (I have 2x Broadcom and 4x Intel); it doesn't matter which configuration I have. Does anyone know of a way to force the bridge to stay enabled, or why it's detecting the bond as disabled when it isn't?
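    Not an answer, but for anyone reproducing this, it helps to watch the bond and the bridge port state side by side during the failover (interface names taken from the question):

        # bond state as the kernel sees it, refreshed every second
        watch -n1 cat /proc/net/bonding/bond0

        # the bridge's view of the same event, even with STP turned off
        brctl showstp br1

        # carrier/link flags on the tagged sub-interface the bridge uses
        ip -d link show bond0.8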


  • Two Firefox windows vs. two browsers? RAM consumption

    - by Kayle
    I don't know enough about RAM and sharing to know what the difference is here. Normally I run Chrome on one desktop for personal use and Firefox on a second desktop for business. I like the separation of saved passwords and whatnot. However, I recently learned that I can open two different profiles in Firefox at the same time, so I was wondering if that would be cheaper on my system resources, or not? Out the door, I don't think it would save more than 40-60 MB of RAM... but I'm wondering, 3 hours later, if RAM handling will be better using just one browser for all my heavy lifting. I only have 2 GB of RAM and I run iTunes and Photoshop as well, almost all day. So I like to save RAM where I can. Any thoughts? UPDATE: I've been centering around Chrome more recently and using Firefox for testing. Dev isn't bad on Chrome, and it's great at releasing memory when I close tabs. In retrospect, I think the best answer to this question is simply for me to buy another 2 GB of RAM.


  • Unable to sync iPod Touch with my PC

    - by alex
    I'm trying to sync a first gen iPod Touch to my PC running Windows 7 64 bit. The problem is that whenever I connect the iPod, iTunes completely freezes (if I start iTunes after connecting the iPod it will simply hang until it's physically disconnected from the PC). I reinstalled iTunes thinking that it had been corrupted, but without any luck. I've had this problem with all the latest versions of iTunes. I've also tried using MediaMonkey and DoubleTwist. None of these apps see the iPod as being connected; DoubleTwist also freezes, just like iTunes. The really strange thing is that I was able to sync the iPod with this PC a while back, but I now seem to have lost that ability. I don't know what changed. Windows detects the device every time it's plugged in (I can see it in Device Manager and I can browse all photos on iPod as if it were a camera). Also, I can sync it to iTunes on Mac OS X without any major problems.


  • How to set up a PRIVATE vimwiki on Dropbox.com

    - by Zongheng Yang
    Hi everyone, I assume those who are reading this page know what vimwiki and Dropbox.com are and what they are for, so I might go directly into my confusion. The common way of setting up a PRIVATE vimwiki on Dropbox is simply to let your vimwiki directories live under the Dropbox folder (but not Dropbox/Public/, because that would be PUBLIC). Dropbox allows directly viewing HTML with a dropbox.com/* URL: for example, an index.html can be accessed by the URL https://dl-web.dropbox.com/get/Wiki/html/index.html?w=bfead71a, with a specified string, ?w=bfead71a, appended after the file name. Hence, if inside index.html there is a reference to A.html, which is located in the same folder index.html is in, it has to be accessed using some URL like https://dl-web.dropbox.com/get/Wiki/html/A.html?w=SPECIFIED_STRING. But it is seemingly impossible to hack vimwiki to correct the hrefs in the converted HTML files in this way. Is there some approach that can resolve this problem? I hope I make myself clear. If you have any questions, please ask me for further explanation. Thank you!


  • Under FreeBSD, can a VLAN interface have a smaller MTU than the primary interface?

    - by larsks
    I have a system with two physical interfaces, combined into a LACP aggregation group. That LACP channel has two VLANs, one untagged (the "native VLAN") and one using VLAN tagging. This gives us:

        lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
                ether 00:25:90:1d:fe:8e
                inet 10.243.24.23 netmask 0xffffff00 broadcast 10.243.24.255
                media: Ethernet autoselect
                status: active
                laggproto lacp
                laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
                laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

        vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=3<RXCSUM,TXCSUM>
                ether 00:25:90:1d:fe:8e
                inet 10.243.16.23 netmask 0xffffff80 broadcast 10.243.16.127
                media: Ethernet autoselect
                status: active
                vlan: 610 parent interface: lagg0

    Is it possible to set a 9K MTU on lagg0 while preserving the 1500-byte MTU on vlan0? Normally I would simply try this out, but this is actually on a vendor-supported platform and I am loath to make changes "behind the back" of their administration interface. This system is roughly FreeBSD 7.3.
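    For what it's worth, FreeBSD's vlan(4) children can generally carry a smaller MTU than their parent, so on a disposable test box the experiment would look roughly like this (a sketch only; the rc.conf naming is an assumption to check against the vendor image):

        # one-off test
        ifconfig lagg0 mtu 9000
        ifconfig vlan0 mtu 1500

        # persistent form in /etc/rc.conf (assumed naming)
        # ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 mtu 9000"
        # ifconfig_vlan0="inet 10.243.16.23 netmask 255.255.255.128 vlan 610 vlandev lagg0 mtu 1500"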


  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:
    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester," or make it unlisted.
    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the .crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a Domain App, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension will also require a similar process, so this doesn't seem ideal. I doubt that option (3) would work. If I published to the admin as a "trusted tester," I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
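    One thing worth double-checking for option 1, even though the question says the MIME type is right: Chrome only accepts a hosted .crx served as application/x-chrome-extension. On Apache that is a one-liner:

        # .htaccess or vhost config on the server hosting the .crx
        AddType application/x-chrome-extension .crx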


  • Dell Latitude D510 Runs From Battery But Not AC Adapter

    - by Jason George
    I have a Dell Latitude D510 that went belly up around two years ago. It will run from the battery; however, the wall adapter will neither power the machine nor charge the battery. Once the battery is dead, the machine is dead. Since it died I've searched repeatedly for solutions. I've tried a new AC adapter and even removed and replaced the DC jack, thinking one of the solder joints might be bad. Both to no avail. After two years of searching I finally found the answer today. Since it's such a simple fix and I had such a hard time finding it, I wanted to post the info for others (as it is apparently a common issue with the D510).
    -----SOLUTION-----
    It seems this is commonly caused by a cracked solder joint at pin 1 of an inductor filter pair (FL2) near the power jack. Pins 1 and 4 are ground, and pins 2 and 3 are power. There should be 20V from pin 1 to pins 2 and 3; anything less indicates a cracked joint that is increasing resistance and dropping the supply voltage. The repair simply requires reflowing all four pins, with a little added solder for security. Detailed instructions can be found in the guide "Dell Latitude D510 solder problem".


  • Windows 7 search does not return results from indexed folders

    - by Dilbert
    I am experiencing this issue over and over again, and I just cannot seem to find the answer. It doesn't make sense, but search simply does not return results from folders that certainly contain the files inside. It's weird that this technology has existed for more than 5 years now (it could be added to Windows XP as an add-on), and they still haven't got it right. My folder contains 10 image files with .png extensions. Two scenarios:
    Scenario 1: I exclude the folder using Indexing Options. Search works.
    Scenario 2: I turn on indexing for this folder. Search does not work.
    Of course, Agent Ransack returns results every time. When I check Advanced Options for the indexing options inside Control Panel, .png files are checked in the File Types tab, using the "File Properties filter". What's the deal with this?
    [Edit] To clarify, this doesn't happen with all folders, but it does with more than one. For the "problematic" folders, even *.* doesn't return a single result. I found some advice to clear the archive and read-only attributes for all files (doesn't make sense, but hey), but it didn't work. Indexing status in Control Panel is: "Indexing complete. 100,000 items indexed." The folder is included in the list. The file types list contains the .png extension (although it doesn't work with any filter, not even *.*).


  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC, on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html I have a simple problem with that - every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and add it again, filter by filter. (I actually don't do it by hand, I have a nice program that does it for me... but still...) There is a problem - I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily manage to delete only the rules that need to be deleted (thus reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes only one hashing rule. My tc filter show:

        filter parent 1: protocol ip pref 1 u32
        filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
        filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101
          match 0a0a0a0a/ffffffff at 16
        filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102
          match 0a0a0a0c/ffffffff at 16
        filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
        filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
          match 00000000/00000000 at 16
          hash mask 000000ff at 16

    The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a IP match (IP address 10.10.10.10)). Removal of some small subgroup would also be good - for example, I could still recreate a bucket (bkt a) pretty fast. My attempts: I tried to number all the filters using prio, but with no help - they just create something unusable (but deletable) below, and the bucketed filters remain there after that gets deleted. Any ideas?
    Edit - I'm adding a simplified tl;dr description of the problem: I created a hash filter on some interface just like in http://lartc.org/howto/lartc.adv-filter.hashing.html and I want to find a command that deletes one rule (e.g. 1.2.1.123) from the table, leaving the rest untouched and working.
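    For what it's worth, recent iproute2 builds let a single u32 entry be deleted by its handle (the fh value in the listing above); a sketch under that assumption, with eth0 standing in for the real device:

        # delete just the 10.10.10.10 match from bucket a,
        # leaving the rest of the hash table untouched
        tc filter del dev eth0 parent 1: protocol ip prio 1 \
            handle 2:a:800 u32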


  • Windows Server 2008 - inexplicable system time jumps/glitches/inaccuracies

    - by Nathan Ridley
    I'm running a production web server on Windows Server 2008. On this server I have a database which logs certain user actions, but every now and again I inexplicably get database entries which, according to the record ID and the records immediately before and after, have the wrong time logged against them (7+ days too old). For example, record ID 1001 will be for Dec 7, 11:00pm, 1002 will be for Dec 7, 11:01pm, then 1003 will be for Nov 28, 1:38am, then the next will be back on track again. The problem occurs in random records (or 2-3 records in a row) and crops up once every few days. This is absolutely baffling, because there is only one place in the application that assigns this date/time value, and it's simply the system UTC date. I have been synchronizing the system time to time-a.nist.gov (which I read in another article is a bit more reliable than the default time.windows.com), and it still occasionally drifts out of sync anyway (3-4 minutes). I'm speculating that occasionally the time server has a temporary glitch where the date changes to a drastically wrong value for a short space of time, then changes back. Either that, or the motherboard clock battery is screwed, and the reason the time momentarily changes is that the motherboard loses the time and the synchronization then puts it back again. Could either of my suspicions be right? Should I turn off time synchronization for a production server? Assigning dates to an event log where the dates are up to 2 weeks before the actual date is a severe problem I can't have when the next version of my application is released. Any suggestions or advice would be appreciated.
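    A reasonable first step is to measure the drift and widen the peer list before picking between the two theories; a sketch using the built-in w32tm tool (peer names are examples):

        REM how far off is the clock, and which source is it using?
        w32tm /query /status
        w32tm /stripchart /computer:time-a.nist.gov /samples:5

        REM sync against several peers instead of a single server
        w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /update
        net stop w32time && net start w32time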


  • How to set up dual wireless routers (G and N) with a single broadband connection

    - by user15973
    A local cafe has an 802.11g wireless router attached to an 8 or 12 Mbps broadband connection. Although many customers still have -g devices, an increasing number are showing up with 802.11n devices (e.g. Mac laptops, iPads). The cafe owner is content with his router's area coverage, but he would like his customers to be able to take advantage of the higher -n download speeds. He could simply replace the -g router with an -n router, but reportedly -n routers slow down considerably when servicing both -g and -n connections. Another option would be to buy the -n router, run an Ethernet cable between the two routers, and (of course) hook up the broadband connection to one of the two routers. Still another option would be to attach an ordinary wired switch to the broadband connection and then attach both routers to the switch. Aside from the cost issue, what are the tradeoffs of those approaches? Which would you recommend, and why? Is there a better option than the ones I mentioned? Thank you in advance for your advice.


  • How to have a shell script available everywhere I SSH to

    - by aib
    I have a shell script which I simply cannot do without: bar, from Theiling Online. I use SSH a lot, on a variety of *nix servers. However, I am not a system administrator and usually don't have the time or privileges to install it on every server I connect to. It is apparently a very portable sh script and has command-line options to export itself as a shell function, which got me thinking: could I use one of OpenSSH's subjectively obscure features to export it everywhere I go? My first thought was to assign the source to an environment variable like BAR="cat -v" and then execute it on the other side as `$BAR`, but 1) I can't even get the cat example to work locally, 2) I don't know how to put the script's actual multiline source into an environment variable, and 3) I have yet to see a machine with PermitUserEnvironment enabled. I guess I could even make do with an ssh option that writes a file called ~/bar at logon, but a more volatile solution would be better. Calling wget http://.../bar at logon would be unacceptable. Any ideas? P.S. PuTTY-specific solutions, though I doubt any exist, are also fine.
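    A minimal sketch of the "write ~/bar at logon" idea that needs no server-side setup: push the script over the same connection just before going interactive (assumes a POSIX shell on the far end; a login shell whose profile rewrites PATH would need a tweak):

        # refresh the remote copy, then log in with it on PATH
        ssh user@host 'mkdir -p ~/bin && cat > ~/bin/bar && chmod +x ~/bin/bar' < ./bar
        ssh -t user@host 'PATH="$HOME/bin:$PATH" exec "$SHELL" -l'

    Wrapping both lines in a local shell function keeps it down to one command per server.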


  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:
    1. Most large databases require a team of DBAs in order to scale. They constantly struggle with limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)
    2. We now have several companies like Intel, Samsung, FusionIO, etc. that have just started selling extremely fast yet affordable SSD hard drives based on SLC flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64 GB).
    So, there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!
    P.S. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.


  • VMware "boot screen" - add info like contact, phone number, etc.?

    - by TheCleaner
    I've tried searching Google and VMware's KB, but maybe I'm not typing the right search criteria - I only find ways to fix problems with booting or screen issues. The default boot screen of a host shows only the stock VMware details. I'm curious if it is possible somehow to add custom details to it - contact name, phone number, etc. I know for the most part the info is "useless" since the host is administered remotely, etc. But when I deploy standalone hosts to branch offices, it'd be nice for them to see this type of info on the boot screen. I may also include the VMs hosted on it (again, on standalone hosts). Normal monitoring, etc. will be done remotely. This is strictly for odd times when the branch contact may say "the electricians are saying they need to turn off this circuit but I have no idea who to call in IT to tell them this box needs to be shut down" or similar. Anyone who has dealt with small branch offices can tell you that if it isn't labeled, they easily forget what it is for and will simply say after the incident "I didn't know what it was or who to call." Possible?
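    On ESX/ESXi 4.x the DCUI text is exposed as an advanced setting, so something like the following may do it - treat the setting name as an assumption and verify it against the host's version:

        # from the host console or an SSH session (setting name assumed)
        esxcfg-advcfg -s "Branch host - do NOT power off - IT helpdesk x1234" /Annotations/WelcomeMessage

        # or via the vSphere client: Configuration > Advanced Settings >
        # Annotations > Annotations.WelcomeMessage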


  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by B. Kulakli
    We have a proxy server on our internal network, and I want to redirect all internet HTTP requests to a web server on the local network. It'll be like a network billboard that says "No direct connection is available. Set up your proxy, etc." For example:
    1. A user starts the computer
    2. Opens the browser
    3. Tries to open www.google.com
    4. Should see the web server output on the local network
    5. Tries another web site on the internet
    6. Should see the web server output on the local network
    7. Sets up the proxy
    8. Tries to connect to a web site
    9. Web site should be loaded
    I have added a simple manual NAT rule to address translation in the Checkpoint firewall, but it simply does not work. Here is my address translation rule:

        Source | Destination | Service | T.Source | T.Destination | T.Service
        MY_PC  | A_GOOGLE_IP | ALL     | ORIGINAL | INT_WEB_SRV   | ORIGINAL

    Then when I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to connect to A_GOOGLE_IP from a browser (http://A_GOOGLE_IP), no replies come back; the connection sits in SYN_SENT and falls into timeout. When I look at the firewall log of INT_WEB_SRV, I can see the incoming connection requests from MY_PC are accepted, with NO denies. By the way, there is no problem seeing INT_WEB_SRV (http://INT_WEB_SRV) from a browser. My understanding is that my NAT rule at Checkpoint NGX R60 does not handle the return packets. I definitely need some help.


  • How do I install php 5.3 on CentOS?

    - by fivelitresofsoda
    Hi, I have to install PHP 5.3 on my CentOS server. If I do yum install php, the base repo installs 5.1.6, which is too old for the apps I need to install. So I've been trying to use the IUS repository, following the official instructions from IUS:

        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-2.ius.el5.noarch.rpm
        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        [root@linuxbox ~]# rpm -Uvh ius-release*.rpm epel-release*.rpm

    OK. Now I simply do yum install php53, etc. for all I need... but I get this error:

        Running rpm_check_debug
        Running Transaction Test
        Finished Transaction Test
        Transaction Check Error:
          file /usr/bin/php from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/bin/php-cgi from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/share/man/man1/php.1.gz from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /etc/php.ini from install of php53u-common-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-common-5.1.6-27.el5_5.3.x86_64

        Error Summary
        -------------

    I have no idea how to solve this. I think I have to delete the base packages, but as a Linux noob I don't know how to do that. Please help. Thank you.
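    The usual way out (hedged; the package names below are read off the error text) is to remove the stock 5.1.6 packages first, or let IUS's replace plugin do the swap in one transaction:

        # remove the conflicting base packages, then install the IUS builds
        yum erase php php-cli php-common
        yum install php53u php53u-cli php53u-common

        # or, in one step, with the IUS replace plugin
        yum install yum-plugin-replace
        yum replace php --replace-with php53u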


  • Control Panel as menu includes a blank item

    - by Matthew Ferreira
    When viewed as a menu attached to the Start Menu in Windows 7 Ultimate x64, the Control Panel contains a blank item. This item cannot be deleted or removed; I also cannot create a shortcut to it. No error message is displayed - instead, simply nothing happens. I've tried using Shell Object Editor (using Run as Administrator) to find out if there is an errant entry in the Control Panel, but many entries (almost two dozen) are blank; there are several valid entries as well. I've looked through the registry and through C:\Windows, \system32, and \SysWOW64, but have had no success. I looked at this question, but I am not using Windows XP and thus have no option to use Tweak UI's Rebuild Icons function. Please note that there is no empty entry in the Control Panel when it is opened normally, only when attached to the Start Menu as a menu. I have compared the list of entries on the attached menu to the normal Control Panel, and other than the blank entry, they are exactly the same. Nothing is missing from one or the other. I've also compared the menu and the normal view to reference images and lists of Control Panel items and have found no irregularities. Is anyone familiar with this problem, or does anyone know of a solution? I've performed virus and malware scans and found nothing. I've tried CCleaner with no change. Nothing with Shell Object Editor. Nothing with Registry Editor. Certainly someone here knows how to fix this. My only guess is the many blank entries visible in Shell Object Editor, but I am reluctant to delete that many items without further analysis and guidance. I appreciate your time and consideration.


  • Can start-stop-daemon use environmental variables?

    - by scottburton11
    I need to daemonize a Windows app running in Wine and create a pid file in /var/run. Since it requires an X11 session to run, I need to make sure the $DISPLAY variable is set in the running user's environment. Assuming I already have an X11 session running with a given display, here's what the start-stop-daemon line looks like in my /etc/init.d script:

        start-stop-daemon --start --pidfile /var/run/wine-app.pid -m -c myuser -g mygroup -k 002 --exec /home/myuser/.wine/drive_c/Program\ Files/wine-app.exe

    Unfortunately, my version of start-stop-daemon on Ubuntu 8.04 doesn't have the -e option to set environment variables. I gather that you could simply set $DISPLAY before the command, like so:

        VAR1="Value" start-stop-daemon ...

    But it doesn't work. Since I'm using the -c {user} option to run as a specific user, I'm guessing there's an environment switch and VAR1 is lost. I've tried exporting DISPLAY from the running user's .profile and/or .bashrc, but that doesn't work either. Is there another way to do this? Is this even possible? Am I overlooking something? Many thanks
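    One workaround that sidesteps PermitUserEnvironment-style tricks entirely: make env(1) the program that start-stop-daemon launches, and let it inject DISPLAY before handing off to the app. A sketch (display number and paths are examples):

        start-stop-daemon --start --pidfile /var/run/wine-app.pid -m \
            -c myuser -g mygroup -k 002 \
            --exec /usr/bin/env -- DISPLAY=:0.0 \
            wine "/home/myuser/.wine/drive_c/Program Files/wine-app.exe"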


  • Munin server monitoring problem: graphs not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following error:

        2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org
        2010/05/10 13:39:01 [DEBUG] Active workers: 1/8

    These errors simply keep repeating themselves until I quit munin-cron. I've followed the directions for debugging munin on the 'Debugging Munin plugins' wiki page, but I get the following results when going through their directions: after telnetting to localhost 4949, I can see a list of plugins and see a node at archstl.archstl.org, but I can't fetch anything. The output is as follows:

        > fetch cpu
        .

    However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line, like so:

        user.value 100829130
        nice.value 3479880
        system.value 13969362
        idle.value 664312639
        iowait.value 12180168
        irq.value 14242
        softirq.value 199526
        steal.value 0

    Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix/change this:
    "If the plugin does run with munin-run but not through telnet, you probably have a PATH problem. Tip: Set env.PATH for the plugin in the plugin's environment file."
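    That tip translates to a few lines in the node's plugin configuration; a sketch (the file name varies by distro, often /etc/munin/plugin-conf.d/munin-node):

        # give the cpu plugin the same PATH it sees under munin-run
        [cpu]
        env.PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

        # then restart the node and re-test through port 4949
        # /etc/init.d/munin-node restart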


  • Logging won't stop on a log file after renaming/moving it... how do I stop it?

    - by Jakobud
    Just discovered that logrotate is not rotating our firewall log, so it's up to 12 GB in size. I need to split the file into smaller chunks and start manually rotating them so I can get things back on track. However, before I start splitting the firewall log up, I need to stop the firewall from logging to the current log file and force it to start logging to a new, empty file. This way I'm not trying to split up or rotate a log file that is still constantly growing. I tried to simply do this:

        mv firewall firewall.old
        touch firewall

    I expected the new empty firewall file to start growing in size, but no... firewall.old is still being logged to. Then I tried to start/stop iptables. No change - firewall.old is still the log file. I tried moving it to another directory. That didn't help. I tried to stop iptables, then change the filename and create a new firewall file, and then start iptables again, but no change. How do I stop the logging to this file and force it to start logging to a new one?
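    The likely cause: the syslog daemon still holds the renamed file open by file descriptor, so a rename is invisible to it. Rotate, then tell the daemon to reopen its outputs; a sketch (daemon name and paths assumed - check whether the box runs syslogd or rsyslog):

        mv /var/log/firewall /var/log/firewall.old
        touch /var/log/firewall
        # signal the daemon to close and reopen all of its log files
        kill -HUP "$(cat /var/run/syslogd.pid)"    # classic syslogd
        # service rsyslog restart                  # rsyslog equivalent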


  • FFmpeg convert video w/ dropped frames, out of sync

    - by preahkumpii
    I recorded a video using Bandicam with the MJPEG encoder to get the least amount of lag. Now I am trying to convert that massive file to an h264 AVI using ffmpeg. I know there are dropped frames in the video stream - more than 100 in the first two minutes - which I assume is simply because Bandicam dropped some when it couldn't keep up. So, when I convert the file to h264, the video and audio are out of sync, and they drift further apart as the output video progresses. Here is my basic command in ffmpeg:

        ffmpeg -i "C:\...\input.avi" -vcodec libx264 -q 5 -acodec libmp3lame -ar 44100 -ac 2 -b:a 128k "C:\...\output.avi"

    I have tried EVERYTHING I can think of, including:
    1. -itsoffset [-]00:00:01 - tried this before and after the input file. This doesn't work because the video becomes more and more out of sync as it progresses.
    2. -async 1 - doesn't work.
    3. -vsync 1 - doesn't work, but it does show dropped frames being duplicated.
    4. Two inputs of the same file with mapping, using -map 0:0 -map 1:1 - doesn't work.
    The source plays just fine. Any ideas how to convert it with ffmpeg and keep the audio and video synced? Thanks.
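    One more combination that sometimes helps with recordings that have holes in the timeline: regenerate timestamps on the way in and let -vsync/-async work together, so ffmpeg duplicates frames where Bandicam dropped them. A sketch, not a guaranteed fix:

        ffmpeg -fflags +genpts -i input.avi \
            -vsync 1 -async 1 \
            -vcodec libx264 -q 5 \
            -acodec libmp3lame -ar 44100 -ac 2 -b:a 128k \
            output.avi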


  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server and am starting to have a little more to bite off than I can chew in terms of working with my log histories. I need to be able to drill back in time based on more flexible details, with more flexible access than what the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume that I can just pipe those into the MySQL socket vs. ALL rsyslog input into MySQL... Could someone please direct me to a resource that can walk me through the process of setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction, as I've not really done a tremendous amount of work with syslog daemons previously in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this more handy for developers.
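    rsyslog can do exactly this selective routing with its ommysql output module; a sketch in legacy rsyslog.conf syntax, against the schema shipped by the rsyslog-mysql package (the app token and credentials are placeholders):

        # /etc/rsyslog.conf
        $ModLoad ommysql

        # store only lines carrying one Heroku app's token in MySQL,
        # then discard them so they don't also land in the flat files
        if $msg contains 'YOUR_APP_HEX_TOKEN' then :ommysql:localhost,Syslog,rsyslog,password
        & ~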

