Search Results

Search found 3112 results on 125 pages for 'daylight saving'.


  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with four layers - "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode > Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect, and I am pretty sure no filter or anything else gives the same result. After the mode is changed to bitmap, my action converts things back to greyscale for further changes.

    This poses a problem. I only want to apply the bitmap mode change to the background layer, and afterwards I want to restore the layer structure as it was - with the foreground, lineart and over lineart layers back above the now-halftoned background. My current method of saving and restoring these layers is clumsy. The action automatically saves the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode change is over. But for the "lineart" and "over lineart" layers I have to manually cut them, paste them into a new document, and later re-cut and re-paste them after running the action. This is so clunky!

    What I would like to know is whether it's possible to set my layers aside in an automated way, and then bring them back in, also automatically. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and pasting the layers there temporarily, but I don't think Photoshop actions can operate outside the current document. It seems the only automated option is the clipboard trick already built into the action, but that leaves me stuck, since I have two more layers that can't fit onto the same clipboard. Help or suggestions would be appreciated. I can keep doing it manually, but a comprehensive action would save me a ton of time.

    Read the article

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple FTP user accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit: I'm not as familiar with Fedora as I'd like (I run Ubuntu on my home box), and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go.

    Would it be wise to continue down this route, and perhaps mount web directories into user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host the web directories, which is why I like the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. Right now I'm only interested in getting FTP and Apache to play nicely in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user that the FTP account saves files as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group the read/write access Apache expects, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
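
    A rough sketch of the SELinux and shared-group side of this, assuming a per-user layout under /home/<user>/public_html; the group name "webdev", the example user "alice" and the vsftpd umask value are illustrative, not taken from the question:

      # Let Apache serve content from user home directories (SELinux)
      setsebool -P httpd_enable_homedirs on
      semanage fcontext -a -t httpd_sys_content_t "/home/[^/]+/public_html(/.*)?"
      restorecon -Rv /home

      # Shared-group idea from the PS: make Apache and the FTP users share a group
      groupadd webdev                        # hypothetical group name
      usermod -aG webdev apache
      usermod -aG webdev alice               # repeat per FTP user
      chgrp -R webdev /home/alice/public_html
      chmod -R g+rwX /home/alice/public_html
      find /home/alice/public_html -type d -exec chmod g+s {} \;   # new files inherit the group

      # In /etc/vsftpd/vsftpd.conf, a local_umask of 002 makes uploads group-writable
      # local_umask=002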

    Read the article

  • On AWS EC2, Unable to run sudo command after modifying permissions to /usr folder

    - by Kayote
    All, we have searched quite a bit, and a few of Eliah Kagan's posts are great about getting access back to sudo. However, our server is on AWS EC2 and I am a complete newbie to this. We are trying to set up cron jobs for backing up our server data.

    What we did: using PuTTY, we created a script file, usr/share/site-db-backup/backupToS3.php. However, Ubuntu was not saving the changes we made, reporting that we did not have permission as user 'ubuntu'. The error details are:

      Upload of file backupToS3.php was successful but error occurred while setting the permission &/ or timestamp.
      If the problem persists, turn on 'ignore permission errors' option. Permission denied. Error code: 3 Request code 9

    So we ran "sudo chmod -R a+rwx /usr" to grant ourselves permission to the /usr folder. However, now whenever any sudo command is run, we get the error:

      /usr/lib/sudo/sudoers.so must be only be writable by owner. fatal error, unable to load plugins.

    We are complete newbies to Ubuntu and EC2, so we do need step-by-step guidance on how to get sudo back and how to successfully write to the cron script sitting under /usr.
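
    For what it's worth, a common recovery path when sudo itself is broken on EC2 is to repair the permissions from a second instance; a minimal sketch, assuming the broken instance's root EBS volume has been detached and attached to a rescue instance as /dev/xvdf1 (device name and mount point are illustrative):

      # On the rescue instance, with the broken root volume attached
      sudo mount /dev/xvdf1 /mnt/rescue

      # Undo the blanket a+rwx: re-tighten /usr and restore the bits sudo cares about
      sudo chmod -R go-w /mnt/rescue/usr
      sudo chmod 0755 /mnt/rescue/usr
      sudo chmod 4755 /mnt/rescue/usr/bin/sudo           # sudo must be setuid root
      sudo chmod 0644 /mnt/rescue/usr/lib/sudo/sudoers.so

      sudo umount /mnt/rescue
      # Detach the volume, re-attach it to the original instance as its root device, and boot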

    Read the article

  • Setup Firefox to save .pages as .zip automatically

    - by Mike Dtrick
    What do I want to do? I would like Firefox to save files with the .pages extension as .zip files automatically.

    Scenario: you are browsing through your emails and you notice your friend just sent you an email with a file attached (a .pages file in this example). Unfortunately, you have a laptop that runs Windows. Your friend continues to send tons of emails with .pages files attached, and you are tired of manually saving the files as .zip files. Ultimately, you would like Firefox to be set up so that the download/file manager recognizes the .pages extension and automatically converts it to a .zip file.

    What have I done? I have saved files manually by selecting Save As "All Files" and setting the extension to .zip. I've gone through Firefox and its documentation and have not found anything on how to accomplish this task.

    Why am I doing this? To save time (only a few seconds, not the main reason); to set up a simple solution that "converts" a file automatically without having to recall the steps for doing it manually (for clients who aren't exactly tech savvy); and so that clients with Windows can access the files.

    IMPORTANT NOTE: I am not trying to save the web page, but rather an Apple document equivalent to a Microsoft Word file.

    UPDATE: The really easy method would be to save one file, right-click it, choose Properties and open all .pages files with WinRAR (or any other program that extracts files from a compressed folder). For the sake of learning, I am going to "neglect" this method and continue to do some research on Firefox add-ons. I would still like Firefox or the download manager to do the bulk of the work of converting the file.

    Read the article

  • 10GE network: Is it still prohibitively expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I am going to have about 16 nodes which can live with 1G ports, but I really want 10GE on the file server and the central node. It's all local, so no need for cables longer than 3-5 m. And of course I want to spend as little money as possible (certainly not more than the whole cluster costs). What are my options?

    1) The legacy solution is to take some 24-48 port 1GE switch and connect the file/central nodes via 4-8 aggregated links. This will work, I guess, and the cost is very acceptable, but I am not sure it's OK to use that many aggregated links. And of course it would be hard to double the bandwidth when needed...

    2) A switch with several 10GE uplink 'ports'. As far as I can see, they all require modules which cost about $1000, so I would need four 10G modules and two 10GE cards... smells like well over $5000...

    3) Connect the file and central nodes directly via two 10G cards, and put four quad-port 1GE NICs in the file server. I save on two 10G modules and a switch; the file server will have to do packet routing, but it's still going to have a lot of CPU left :-)

    4) Any other options? InfiniBand?

    5) Do Myrinet adapters work fine? I guess there are no cheaper options?

    6) Hmm... scrap the file server, put it all on the central node and provide a dedicated 1GE port for each of the nodes... This is sad...
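
    If the aggregated-1GE route (options 1 and 3) is taken, the Linux side is just a bonding interface; a minimal sketch using iproute2, assuming four interfaces eth1-eth4 on the file server and an 802.3ad-capable switch (interface names and the address are illustrative):

      # Create an LACP (802.3ad) bond from four 1GE ports
      modprobe bonding
      ip link add bond0 type bond mode 802.3ad miimon 100
      for nic in eth1 eth2 eth3 eth4; do
          ip link set "$nic" down
          ip link set "$nic" master bond0
      done
      ip link set bond0 up
      ip addr add 192.168.0.10/24 dev bond0    # example address

      # Note: a single TCP flow is still limited to ~1 Gb/s; aggregation helps
      # only when many clients (or many flows) hit the server at once.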

    Read the article

  • RHEL Java Application returns "No space left on device" but only 3% used

    - by FiveO
    My Java application throws the following exception when saving a new file in /opt/wso2 on CentOS 6.4:

      Caused by java.io.FileNotFoundException: ... (No space left on device)
      Caused by: java.io.FileNotFoundException: /opt/wso2/FrameworkFiles/trk_2014062500042488825_TRCK_PatfallHospis_pFromHospis_66601fb3-a03c-4149-93c3-6892e0a10fea.txt (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:212)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:99)
        at com.avintis.esb.framework.adapter.wso2.FrameworkAdapterWSO2.sendMessages(FrameworkAdapterWSO2.java:634)
        ... 23 more

    But when I run df -a I can see that the partition still has plenty of space available:

      [root@stzsi466 wso2]# df -a
      Filesystem                       1K-blocks    Used  Available Use% Mounted on
      /dev/mapper/vg_stzsi466-lv_root   12054824 2116092    9326380  19% /
      proc                                     0       0          0   -  /proc
      sysfs                                    0       0          0   -  /sys
      devpts                                   0       0          0   -  /dev/pts
      tmpfs                              4030764       0    4030764   0% /dev/shm
      /dev/sda1                           495844   53858     416386  12% /boot
      /dev/sdb1                         51605436 1424288   47559744   3% /opt/wso2
      none                                     0       0          0   -  /proc/sys/fs/binfmt_misc

      [root@stzsi466 ~]# df -i
      Filesystem                        Inodes  IUsed    IFree IUse% Mounted on
      /dev/mapper/vg_stzsi466-lv_root   765536  45181   720355    6% /
      tmpfs                            1007691      1  1007690    1% /dev/shm
      /dev/sda1                         128016     44   127972    1% /boot
      /dev/sdb1                        3276800   6137  3270663    1% /opt/wso2

    What is the problem here? Is it caused by Java on CentOS 6.4? I have another server running Red Hat Enterprise Linux (RHEL) 6.4 where everything works fine - same Java, etc. Does anyone know of this problem?
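
    A few checks that can narrow down a bogus "No space left on device" when df looks healthy; this is a diagnostic sketch only, using the path from the exception above (adjust the pgrep pattern to whatever the Java process is actually called):

      # Confirm which filesystem the target path actually resolves to
      df -h /opt/wso2/FrameworkFiles
      df -i /opt/wso2/FrameworkFiles

      # Space held by deleted-but-still-open files does not show up in du
      lsof +L1 | grep /opt/wso2

      # Check the mounts as seen by the Java process itself (a stale or shadowed
      # mount can make the JVM write to a different, full filesystem)
      pgrep -f wso2 | head -1 | xargs -I{} cat /proc/{}/mounts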

    Read the article

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver ( http://blog.linformatronics.nl/ ) which works just fine over both IPv4 and IPv6 when using a non-SSL connection. However, when I connect to it through https, IPv6 works as expected, but an IPv4 connection throws a client-side error. The server-side logs are empty for the IPv4/https connection. Summarized in a table:

           | http  | https
      -----+-------+-------------------------------------------------------
      IPv4 | works | OpenSSL error, failed. No server side logging.
      -----+-------+-------------------------------------------------------
      IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is: why is this OpenSSL error being thrown and how can I solve it? Below is some extra information about the setup.

    IPv6 https - command used to reproduce the IPv6/https behaviour:

      $ wget --no-check-certificate -O/dev/null -6 https://blog.linformatronics.nl
      --2012-11-03 15:46:48--  https://blog.linformatronics.nl/
      Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
      Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
      WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost': Self-signed certificate encountered.
      WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
      HTTP request sent, awaiting response... 200 OK
      Length: 4556 (4.4K) [text/html]
      Saving to: `/dev/null'
      100%[=======================================================================>] 4,556  --.-K/s  in 0s
      2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https - command used to reproduce the IPv4/https behaviour:

      $ wget --no-check-certificate -O/dev/null -4 https://blog.linformatronics.nl
      --2012-11-03 15:47:28--  https://blog.linformatronics.nl/
      Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
      Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
      OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
      Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
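
    Since the handshake fails before anything is logged, it can help to look at what actually answers on port 443 for each address family. The "unknown protocol" error typically means the peer replied with something that is not TLS at all (for example plain HTTP, or a port-forward to the wrong service) - that is an assumption worth verifying, not a diagnosis. A diagnostic sketch:

      # What does the IPv4 address return on port 443?
      echo | openssl s_client -connect 82.95.251.247:443 2>&1 | head -20

      # Compare with a raw probe: if this prints an HTTP response or an error page,
      # something other than Apache's SSL vhost is answering on IPv4
      printf 'GET / HTTP/1.0\r\n\r\n' | timeout 5 nc 82.95.251.247 443 | head -20

      # And check which addresses Apache is actually binding for SSL
      sudo netstat -tlnp | grep ':443'
      grep -ri '^\s*Listen' /etc/apache2/ports.conf /etc/apache2/sites-enabled/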

    Read the article

  • Subdocument in Word won't save

    - by ChrisW
    Because I know Word has a history of not liking very large documents (my supervisor specifically told me not to use LaTeX... grr), I decided to learn the master document / subdocument feature of Word when writing my PhD thesis. I have the title page, table of contents, etc. in the master document, and each chapter as a separate document. However, when I save the master document, it appears to save all the chapter documents apart from one (Chapter 4), for which it brings up the Save Document dialog box, helpfully with "Chapter4.docx" in the "Save as" box (n.b. Chapter4.docx is not open). Clicking Save does nothing and doesn't make the dialog box go away, and saving as a different document means that my changes aren't reflected in the original document. There must be some reason Word doesn't like this particular document, but I've got no idea why - there's nothing special in it that isn't in any of the other chapters.

    I have tried closing all documents, renaming Chapter4.docx, opening the master document, expanding all documents, OKing the warning that Chapter4.docx does not exist, and inserting the 'new' document, but even when I save the master document it still won't save the new Chapter4 document. If anyone knows any reason why Word is acting like this (or if I'm doing anything stupid), I'll be eternally grateful. (P.S. Sorry for the long rambling message. It's late; I've been working on my PhD for 4.5 years, I really really want to throw this computer out the window, and I hope people are kind enough not to downvote this question because of its rambling nature!)

    Update: with Word closed, I've tried to delete Chapter4.docx (having made a backup!), but I get a warning that it can't be deleted because it's open in Microsoft Word. These files are on a network drive and the same problems are happening on two different computers. I could log in to the filestore through ssh and force the file to be deleted, but I'm curious to know why this is happening!

    Read the article

  • Mac mini simple customized, Mac mini server or other?

    - by microspino
    I'm facing a big IT choice for my little office and I need some advice. We have 5 users, 1 super user, 1 HP 500 DesignJet plotter, 4 other laser printers, and 1 HP fax/print/scan/copy machine. All the clients are XP SP3 boxes. We would like to:

      - centralize and share 90 GB of files using a Dropbox (this way we will have LAN sync of local working directories + internet backup + access to our files wherever we are)
      - centralize our plotter, printers and fax machine
      - back up all the workstations
      - share Outlook calendars and tasks
      - run 24x7 while saving some energy

    Of course this setup is just the first step towards a more serious and creative network management of our office, so we are open to new ideas. The budget ranges from 400 to 900 euros, and we are not tech gurus, but at least one of us is a power user close to becoming a geek. I've read some articles on macminicolo about a Mac mini, either the normal one or with Snow Leopard Server. I've heard about Windows Home Server too on the Lifehacker website, but I'm in a sort of analysis paralysis - can you help me?

    Read the article

  • Understanding tcptraceroute versus http response

    - by kojiro
    I'm debugging a web server that has a very high wait time before responding. The server itself is quite fast and has no load, so I strongly suspect a network problem. Basically, I make a web request:

      wget -O/dev/null http://hostname/
      --2013-10-18 11:03:08--  http://hostname/
      Resolving hostname... 10.9.211.129
      Connecting to hostname|10.9.211.129|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: ‘/dev/null’
      2013-10-18 11:04:11 (88.0 KB/s) - ‘/dev/null’ saved [13641]

    So you see it took about a minute to give me the page, but it does give it to me with a 200 response. So I try a tcptraceroute to see what's up:

      $ sudo tcptraceroute hostname 80
      Password:
      Selected device en2, address 192.168.113.74, port 54699 for outgoing packets
      Tracing the path to hostname (10.9.211.129) on TCP port 80 (http), 30 hops max
       1  192.168.113.1  0.842 ms  2.216 ms  2.130 ms
       2  10.141.12.77  0.707 ms  0.767 ms  0.738 ms
       3  10.141.12.33  1.227 ms  1.012 ms  1.120 ms
       4  10.141.3.107  0.372 ms  0.305 ms  0.368 ms
       5  12.112.4.41  6.688 ms  6.514 ms  6.467 ms
       6  cr84.phlpa.ip.att.net (12.122.107.214)  19.892 ms  18.814 ms  15.804 ms
       7  cr2.phlpa.ip.att.net (12.122.107.117)  17.554 ms  15.693 ms  16.122 ms
       8  cr1.wswdc.ip.att.net (12.122.4.54)  15.838 ms  15.353 ms  15.511 ms
       9  cr83.wswdc.ip.att.net (12.123.10.110)  17.451 ms  15.183 ms  16.198 ms
      10  12.84.5.93  9.982 ms  9.817 ms  9.784 ms
      11  12.84.5.94  14.587 ms  14.301 ms  14.238 ms
      12  10.141.3.209  13.870 ms  13.845 ms  13.696 ms
      13  * * *
      ...
      30  * * *

    I tried it again with 100 hops, just to be sure - the packets never get there. So how is it that the server does respond to requests via http, even after a minute? Shouldn't all requests just die? I'm not sure how to proceed with debugging why this server is slow (as opposed to why it responds at all).
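
    A couple of follow-up probes that can separate "slow network" from "slow server"; a sketch only, reusing the hostname placeholder from the question:

      # Where does the minute go? Break the request down into phases
      curl -o /dev/null -s -w 'dns=%{time_namelookup}  connect=%{time_connect}  first_byte=%{time_starttransfer}  total=%{time_total}\n' http://hostname/

      # A trace dying after hop 12 usually just means later hops drop the ICMP
      # time-exceeded replies; the final SYN can still get through.
      # Modern traceroute can do the same TCP probe natively:
      sudo traceroute -T -p 80 hostname
      # or, interactively:
      sudo mtr --tcp --port 80 hostname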

    Read the article

  • Windows 7 hangs after going into sleep a second time

    - by Brian Stephenson
    I've searched all around Google and can't figure out why this is happening, so I've decided to ask here to see if anyone has a problem like this. As the title says, whenever I sleep the machine ONCE I'm able to wake it, but going back to sleep again AFTER waking up for the first time results in it hanging with no input and no output, with the fan spinning as fast as possible and a lot of heat being spewed out as well. I've tried various things like setting all USB root hubs not to be switched off for power saving, disabling USB selective suspend, disabling PCI-e link state power management, and even unplugging ALL USB devices, and it won't wake up after the second attempt. I've even waited up to a full hour with the CPU fan spinning loudly, and it's still stuck trying to wake up.

    The only USB devices I use are a Microsoft USB Comfort Curve Keyboard 2000 (IntelliType Pro) and a generic HID-compliant mouse from Creative, model number OMC90S "CREATIVE MOUSE OPTICAL LITE". My other devices like external drives and controllers are unplugged when I'm not using them, as having too many USB devices plugged in at a time causes a deadlock on almost all of the ports I have.

    Here are my system specifications (most of these are from CPU-Z):

      Brand: Gateway DX4300-19
      Mainboard: Gateway RS780
      Chipset: AMD 780G Rev 00
      Southbridge: AMD SB700 Rev 00
      LPCIO: ITE IT8718
      BIOS: American Megatrends Inc. ver P01-A4 09/15/2009
      CPU: AMD Phenom II X4 810 at 2.60 GHz
      RAM: 8.0 GB DDR2 Dual Channel Ganged Mode at 400 MHz
      GPU: ATI Radeon HD3200 Graphics Integrated - RS780
      OS: Windows 7 Home Premium x64 OEM (Acer Group)
      HDD: WDC WD10EADS-22M2B0 1.0 TB (Western Digital Green Caviar)

    My BIOS has absolutely no control over whether sleep mode is set up as S1 or S3, so I can't check these settings or even change them. Hybrid sleep is also disabled. I can successfully go into hibernation and wake from it, but this is painfully slow due to a hard drive problem I'm having with this "Green Drive" (hibernation takes over ~3 minutes to complete). Any help would be appreciated, thanks.

    Read the article

  • Shortcut to "printer and faxes" on another computer

    - by Doltknuckle
    I have a print server running Windows Server 2008 that has about 50 printers on it. In Windows XP, I was able to connect to the server using the UNC name and make a shortcut to the "Printers and Faxes" folder. (For the record, I know that it isn't really a folder, but that's outside the scope of this question.) I have recently switched to Windows 7 and I find that the jump lists are really useful. One of the things I want to do is make it easy to connect to that server's "Printers and Faxes" folder. I would like to use something like a shortcut that I can open to go immediately to that location. The problem is that Windows 7 doesn't have a way to create a shortcut like you could in Windows XP. There is a button on the toolbar that says "View remote printers" which sends you to the correct folder, but I'd like to avoid having to type out the server name. I also can't use the "View network" link in Windows Explorer; our organization has over 6,000 machines and viewing the network lists all of them. This is all about saving time by using the minimum number of mouse clicks and key presses in normal operation. Does anyone have any suggestions?

    Read the article

  • How to embed/hardcode SRT subtitles into mp4 videos with VLC?

    - by Jens Bannmann
    I'm looking for a way to "burn in" or render/re-embed/hardcode subtitles (from an SRT file) into an MP4 video with VLC. But no matter what options I use, it never works properly: I get a file that plays video way too fast (audio is normal), or one that plays normally but actually does not have embedded subtitles. Also, with some options (like the ones below) it does not play in QuickTime, only in VLC.

    So the main question is: how can I make this work in VLC? Secondary questions are: How do I decide which options to set? Which settings are best if I want to keep the file's bitrate etc. as close to the original as possible and only embed subtitles? (It seems I cannot leave the fields empty or leave Video/Audio unchecked, so I guess I would first need to figure out the original audio and video bitrates.) What do the "Scale" and "Channels" options mean? None of these are answered in the VLC documentation.

    For example, this is one set of options I used in the "Advanced Open File…" dialog:

      Advanced Open File…  myFileName.mp4
      [ ] Treat as a pipe rather than as a file
      [x] Load subtitles file: mySubtitleFileName.srt
      [ ] Play another media synchronously
      [x] Streaming/Saving

      Streaming and Transcoding Options
      [ ] Display the stream locally
      (o) File [outputFileName.mp4]    [ ] Dump raw input
      Encapsulation Method: (MPEG 4)

      Transcoding options
      [x] Video (mp4v)  Bitrate (kb/s) [256]  Scale [1]
      [x] Audio (mp3)   Bitrate (kb/s) [128]  Channels [1]
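
    For reference, the dialog above roughly corresponds to a command-line transcode; a sketch of the usual "burn in subtitles" invocation, where the soverlay flag is what renders the SRT into the picture (file names and bitrates are placeholders, and exact option spellings vary between VLC versions):

      vlc -I dummy myFileName.mp4 --sub-file=mySubtitleFileName.srt \
          --sout="#transcode{vcodec=mp4v,vb=1024,acodec=mp4a,ab=128,channels=2,soverlay}:standard{access=file,mux=mp4,dst=outputFileName.mp4}" \
          vlc://quit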

    Read the article

  • Alternative Bluetooth Stacks for Windows 7 64bit

    - by Martin
    I have a notebook with a built-in Broadcom BCM2046 Bluetooth adapter and several Bluetooth HID devices (mice, keyboards etc.). The operating system is Windows 7 64-bit Professional. The HID devices all work perfectly with other computers, but on the system mentioned above, problems occur with some power-saving features inside the HID devices (see e.g. the Amazon reviews for the Microsoft Mobile Keyboard 6000 not waking up). I have tried the Bluetooth drivers supplied by Windows Update and the latest Broadcom drivers directly from the Broadcom updater software; the problems persist (I can rule out any further configuration issues or alternative device drivers - I have tried every possibility). I have tried a trial version of the BlueSoleil Bluetooth stack and it solved the wake-up problem. However, the BlueSoleil stack causes some other problems, is relatively expensive, and I would prefer not to use it.

    My question: are there any other alternative Bluetooth stacks available for Windows 7 64-bit? To my knowledge there used to be a Toshiba Bluetooth stack for non-Toshiba hardware, but the older versions I have found on the internet do not install; they do not seem to recognize the Bluetooth hardware when installing the driver.

    Read the article

  • Router vs switch in a LAN [closed]

    - by servernewbie
    If I have a LAN and connect it with a switch, I understand the switch uses a CAM table to forward frames at layer 2 (by saving MAC-to-port mappings). So far, all good. However, when using a router for a LAN (ONLY for a LAN, not to connect it to "the outside" WAN/internet/etc.) I get a bit confused as to how it internally processes packets. I would split this into two router scenarios.

    Router with built-in switch: in this scenario I would expect it to act exactly as a switch with an internal CAM table. This would probably be a bit faster (guessing here?) than the next option.

    Router without built-in switch: here is where I get confused. If hostA wants to send a packet to hostB, it will ARP to find hostB's MAC address and send it there. Now, if we had a switch (the scenario above), this would be easy. But how does it work with a router WITHOUT a switch? If I had to guess, hostA would send an Ethernet frame with hostB's MAC address onto the wire. The router would pick up the frame (even though the router has another MAC address, it would still fetch this frame addressed to hostB's MAC). It would strip the Ethernet header, check the IP, and then check its own internal ARP table again for the destination MAC address. Now, this seems like a waste of resources compared to a router with a built-in switch. But maybe it does not work like that at all. Does it also contain a CAM table? If that were true, what would the difference between these two routers really be?

    Read the article

  • Why does Windows Event Log stop logging events before maximum log size is reached?

    - by Tuure Laurinolli
    I have a service that produces a lot of event log output. Currently the event log is configured to overwrite old events as needed, to keep the log from ever getting full. We have also increased the event log size considerably (to about 600 MB). Recently the service started reporting errors to its clients, and the error message it was sending them is "The event log file is full". How can this be, when the event log is configured to overwrite as necessary? In our hurry to get the service back up we cleared the event log without saving its contents, but most likely it had not reached 600 MB yet, judging from the sizes of some earlier log dumps. There is also MS KB entry 312571, which reports that a hotfix for a similar issue is available, but the configuration that the fix applies to is not exactly the same as ours; specifically, the fix only applies if event logs are configured to never overwrite old events. I wonder if this has something to do with the fact that the log files apparently are memory-mapped. What happens if the system runs out of address space to map the files into?

    Read the article

  • Wireless keeps shutting off in Windows 7

    - by Nathan Adams
    I have Windows 7 Ultimate 32-bit installed on a Dell Latitude XT Tablet, and for the life of me I can't figure out this really weird problem. The symptom is that the wireless will disconnect from the AP, and if I tell it to scan again, it says there are no APs in the area. I do have another wireless card in the laptop, and if I disable the first one and enable the second, I am able to get onto the wireless; however, if I want to use the first card again I have to restart. I tried enabling/disabling the device - nothing will kick-start the wireless again on the first card without a restart. I even tried different drivers. It seems to be random, but it does occur more often when there is increased network activity (i.e. downloading a large file). The laptop doesn't seem to be overheating.

    I have tried the following: under "Change advanced power settings" for the current power profile, I set "Wireless Adapter Settings" to "Maximum Performance"; and under Device Manager, I went to the card in question, opened the Advanced tab and set "Power Saving mode" to "MAX_PSP".

    Both cards seem to exhibit the behavior after a while. The two cards are:

      - Dell Wireless 1505 Draft 802.11n WLAN Mini-Card
      - Gigabyte GN-WS30N 802.11n mini WLAN Card

    Does anyone have any ideas, or has anyone run into this before?

    Read the article

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services are only listening on one of them, and to check them correctly I'd like to know whether it's possible to have multiple IP addresses configured for a single host in Icinga. Here's a minimal example.

    Remote server:

      eth0: 1.2.3.4 (public IP)
      eth1: 10.1.2.3 (private IP, secure tunnel)
      Apache listening on 1.2.3.4:80 (public only)
      OpenSSH listening on 10.1.2.3:22 (internal network only)
      Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:

      eth0: 10.2.3.4 (private IP, internet access)

    Now if I define a host:

      define host {
          use        generic-host
          host_name  server1
          alias      server1.gertvandijk.net
          address    10.1.2.3
      }

    this will not check the HTTP status correctly. And defining an additional host:

      define host {
          use        generic-host
          host_name  server1-public
          alias      server1.gertvandijk.net
          address    1.2.3.4
      }

    will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts so they show up as a single host, while still providing an easy way to check the services on their proper addresses. What is the most elegant, configuration-line-saving solution to this? I've read about several plugins available to work around this, but I can't figure out what the current way to address it is. Solutions go back to 2003, but I'm running Icinga 1.7.1, which is already capable of the address6 option - yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up by now. Please share your solution to this. Thanks! :)
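
    One pattern that keeps everything on a single host object is a custom variable for the second address; a sketch in Nagios/Icinga 1.x object syntax, assuming a check_http command definition that passes its argument through to the plugin (the variable name _PUBLIC_ADDRESS is illustrative, not an Icinga built-in):

      define host {
          use              generic-host
          host_name        server1
          alias            server1.gertvandijk.net
          address          10.1.2.3          ; used for the default host check, SSH, SMTP
          _PUBLIC_ADDRESS  1.2.3.4           ; custom variable, exposed as $_HOSTPUBLIC_ADDRESS$
      }

      define service {
          use                  generic-service
          host_name            server1
          service_description  HTTP (public interface)
          check_command        check_http!-I $_HOSTPUBLIC_ADDRESS$
      }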

    Read the article

  • When run as a scheduled task, cannot save an Excel workbook when using the Excel.Application COM object in PowerShell

    - by Daniel Richnak
    I'm having an issue where I've automated creating an Excel.Application COM object, adding some data into a workbook, and then saving the document as an .xlsx. This works fine if:

      - I'm already in the PowerShell interactive host and either run each command in sequence or execute it as a .ps1.
      - I run it from cmd.exe, using the syntax: powershell.exe -command "c:\path\to\powershellscript.ps1"
      - I create a scheduled task in Windows 7 / Server 2008 R2, use the above powershell.exe -command syntax, and use the mode "Run only when the user is logged on".

    It fails when I modify the same scheduled task but set it to "Run whether the user is logged on or not". Here's a sample script that illustrates the problem I'm having:

      $Excel = New-Object -Com Excel.Application
      $Excelworkbook = $Excel.Workbooks.Add()
      $Excelworkbook.SaveAs("C:\temp\test.xlsx")
      $Excelworkbook.Close()

    I have a theory that the COM object fails somehow if my profile isn't loaded / if it's not performed in an interactive session. Any ideas on which options to choose when creating the scheduled task, or which options to use when creating the Excel object or calling the SaveAs() method? Can anybody reproduce this? I've been able to see this behavior on both a Server 2008 R2 machine and Windows 7. I haven't tried other platforms.
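
    One workaround that is widely reported for Office COM automation in non-interactive sessions (services, and scheduled tasks set to run whether the user is logged on or not) is to create the Desktop folder Office expects under the system profile; a hedged sketch, not an official fix:

      # Office COM automation often needs these folders to exist when run non-interactively
      New-Item -ItemType Directory -Force -Path "C:\Windows\System32\config\systemprofile\Desktop"
      New-Item -ItemType Directory -Force -Path "C:\Windows\SysWOW64\config\systemprofile\Desktop"

      # Also worth trying in the scheduled task: enable "Run with highest privileges"
      # and point "Start in" at a folder the task account can write to.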

    Read the article

  • Creating a bootable USB drive from a distro split over two DVD ISOs

    - by Kev
    I am searching and not finding the right way to do this. Please note, I don't think I'm trying for anything strange here: I just want to make a bootable USB stick of a single OS that happens to be larger than one DVD, and therefore larger than FAT32 will allow for in a single file. On our slow connection I spent a long time downloading CentOS 5.9's two DVD ISOs:

      CentOS-5.9-x86_64-bin-DVD-1of2.iso (4.4 GB)
      CentOS-5.9-x86_64-bin-DVD-2of2.iso (718 MB)

    I have a USB stick that I want to somehow get these two ISOs onto. Since the first one is 4.4 GB, I can't use ISO2USB, because it insists on FAT32. I cannot find an alternative that lets you specify more than one ISO image - of the same distro; I'm not trying for some fancy multi-boot thing - to put on the same stick. I guess I should have downloaded the CD ISOs, but I thought I was "saving time" because then I wouldn't have as many files to run through the MD5 checker. There's no IMG file of the whole thing (only a net-install version, which I don't want - I want to pre-download everything), otherwise I would have gone for that.

    So, given that I have these two DVD ISOs, how can I get them onto a stick that will boot and make use of both of them properly to install CentOS somewhere? Again, I don't think this is anything out of the ordinary, yet I can't find software or docs that support it. Am I stuck re-downloading everything in CD-sized ISOs just to do this? I found this, but it doesn't run on Windows, and I am using Windows to prepare the stick.

    Read the article

  • Track kids browsing history even when they know how to clear it manually

    - by Darren Newton
    I have a colleague with two teenage boys (yes, cue the clichés about 'I have this friend, see...'). He's currently having issues with them browsing pr0n and wants to do a little spying on their browsing (I'm staying clear of the philosophies/ethics of this). The kids are savvy enough to clear their browsing history when they're done. As I'm his go-to for IT, he has asked me if there is a way to keep hold of the browsing history. The family uses Macs, and the kids surf with Safari. I know that browsing history is kept in ~/Library/Safari/History.plist. I figure there should be a way to write either an AppleScript or another script (Python/Ruby/Bash) that can back this file up to a different location (/opt/local/history, etc.).

    Since the kids know to clear their history when they're done, should the file be periodically backed up with something like a cron job or a tool like Hazel? While that could work, it seems like it would create a ton of little incremental backups. Or is it possible to 'watch' ~/Library/Safari/History.plist and incrementally add changes to a backup file (saving a diff, so to speak) without losing any data? Any ideas/solutions appreciated.

    UPDATE/EDIT: Got word from the concerned dad that the oldest uses Firefox on a different PC, so the OpenDNS solution (preferably at the router level) is the best answer so far, as it would capture usage for the whole house.
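
    For the cron-job idea mentioned above, a minimal sketch: it keeps timestamped copies only when the file has actually changed, which limits the pile of incremental backups (the destination path is the one suggested in the question; the script name is illustrative):

      #!/bin/bash
      # Save as e.g. /usr/local/bin/history-snapshot.sh and run from cron/launchd every few minutes
      src="$HOME/Library/Safari/History.plist"
      dst_dir="/opt/local/history"
      mkdir -p "$dst_dir"

      # Only copy if the newest snapshot differs from the current file
      latest=$(ls -t "$dst_dir"/History-*.plist 2>/dev/null | head -1)
      if [ -z "$latest" ] || ! cmp -s "$src" "$latest"; then
          cp "$src" "$dst_dir/History-$(date +%Y%m%d-%H%M%S).plist"
      fi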

    Read the article

  • Light Blue Monitor Screen

    - by SixfootJames
    I have seen this before with an older monitor: over time, the colours shift to a light blue haze. This has started happening with an older monitor of mine (a Gigabyte monitor), and although none of the pins are bent and it's a brand-new machine, there is no obvious reason, other than age, for it to show the light blue tint. Perhaps it is simply time for a new monitor, but if there is a way of saving it, I would appreciate the insight. Perhaps there is something I have not tried; perhaps it has something to do with the new machine rather than the monitor? I had the monitor plugged into two other machines over the weekend and didn't have this problem, so I am not quite sure what to make of it. Many thanks!

    EDIT: I should also add that when I plugged the monitor into the older machines, I had a VGA converter attached to the end of the newer DVI output - which, of course, I don't need when it's plugged into the newer PC.

    Read the article

  • Internet Explorer ignoring PHP setcookie() from CentOS server

    - by Hussein Sabbagh
    In the past we used a Windows XAMPP server for an internal website. It worked fine but had some intermittent issues, so we decided to move to a LAMP server on CentOS. We made the switch today, but it turns out Internet Explorer ignores every attempt I make at setting a cookie. There is no underscore in the URL being used - the URL is actually the same one the XAMPP server used, where I was able to save cookies without any problems. It really doesn't make any sense to me; all of the code is the same. The only things that changed are the version of PHP and the server OS. The website works in all browsers except IE, and I can't even make a simple setcookie call work. On a blank test page I use:

      setcookie("test", "test", time()+36000, "/");
      sleep(5);
      print_r($_COOKIE);

    and there is nothing there. Our users can't log into the website because of this, and I have no idea what the issue is. If anyone can provide any clues or resolutions I would greatly appreciate it. Obviously the easy answer is to not use IE, but that is not an option in this case.

    Read the article

  • Chef bash resource not executing as specified user

    - by Arthur Maltson
    I'm writing a Chef cookbook to install Hubot. In the recipe, I do the following:

      bash "install hubot" do
        user hubot_user
        group hubot_group
        cwd install_dir
        code <<-EOH
          wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
          tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
          cd hubot && \
          npm install
        EOH
      end

    However, when I run chef-client on the server installing the cookbook, I get a permission-denied error writing to the home directory of the user that runs chef-client, not the hubot user. For some reason, npm is trying to run under the wrong user, not the user specified in the bash resource. I am able to run sudo su - hubot -c "npm install /usr/local/hubot/hubot" manually, and this gets the result I want (installs Hubot as the hubot user). However, it seems chef-client isn't executing the command as the hubot user. Below you'll find the chef-client output. Thank you in advance.

      Saving to: `hubot-2.1.0.tar.gz'
      0K ......                                                100%  563K=0.01s
      2012-01-23 12:32:55 (563 KB/s) - `hubot-2.1.0.tar.gz' saved [7115/7115]
      npm ERR! Could not create /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
      npm ERR! Failed creating the tarball.
      npm ERR! couldn't pack /tmp/npm-1327339976597/1327339976597-0.13104878342710435/contents/package to /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
      npm ERR! error installing [email protected] Error: EACCES, permission denied '/home/<user-chef-client-uses>/.npm/log'
      ...
      npm not ok
      ---- End output of "bash"  "/tmp/chef-script20120123-25024-u9nps2-0" ----
      Ran "bash"  "/tmp/chef-script20120123-25024-u9nps2-0" returned 1
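
    A likely culprit, judging from the EACCES path above, is that the bash resource switches the uid but not the environment, so npm still sees the original $HOME. A hedged sketch of the usual workaround, setting HOME explicitly (the /home/... path is assumed, not taken from the question):

      bash "install hubot" do
        user hubot_user
        group hubot_group
        cwd install_dir
        # Without this, HOME still points at the chef-client user's home,
        # which is where npm tries to write its cache and logs
        environment 'HOME' => "/home/#{hubot_user}", 'USER' => hubot_user
        code <<-EOH
          wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
          tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
          cd hubot && \
          npm install
        EOH
      end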

    Read the article

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using

      find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6

    on the root music folder and then deleting every .flac (I have copies of them in the archive) seems straightforward. After trying it with one file it worked, and all of the tags were transferred too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image has already been extracted: Embed album art in OGG through command line in linux.

    One possible solution I thought about was extracting the album art from every song (not every song has one, though, and some even have 2 or 3!), temporarily saving it, and then using the script to include it in the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temp images need distinct names. Is there a (Linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art, and it really is a shame that it doesn't, since these two formats should (in theory) share the same tag format.

    Edit: 15 days and no one has even tried to answer. It's funny - most of my questions don't get answered. Too hard? Wrong SE site?

    Read the article
