Search Results

Search found 4596 results on 184 pages for 'moderately interesting'.


  • laptop headphone jack problems

    - by Xitcod13
    A while back my headphones mysteriously started making static noise, and one of them stopped working completely. At first I thought it was the headphones, so I bought new ones; alas, that did not solve the problem, so the fault must be inside the headphone jack. I did some research online, and one suggestion was to unplug USB devices, which has the strange effect of changing the static into high-frequency Morse-code noises (it's the aliens). I don't have this problem when I listen to music on the speakers; the static is there on the headphones whether music is playing or not. I own a soldering iron for electronics and I am quite skilled at soldering, so I would appreciate any help I can get. My laptop is the HDX 18. It has 2 headphone jacks that act exactly the same. An interesting thing I just noticed: when I pull the headphones out almost all the way, both of them start working, but so do the speakers, making the headphones kind of useless. Maybe there is a way to turn off the speakers as a temporary solution. I am using Vista x64.

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments, and our Apache document root is a symlink to a particular code release; the deployment procedure switches the symlink from the old release to the new release as the final step. We are migrating our webservers from physical servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers suffer from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (I think I've even seen it take a couple of minutes). It's as if Apache has cached the physical path behind the symlink for some time. Does anyone know which Apache settings I could look at to make it pick up changes to its served files more quickly? Thoughts: I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done? The website runs PHP code, so perhaps there is some PHP config difference between RHEL and Ubuntu? I checked realpath_cache_ttl, but both servers have it commented out:

      ; Duration of time, in seconds for which to cache realpath information for a given
      ; file or directory. For systems with rarely changing files, consider increasing this
      ; value.
      ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
      ;realpath_cache_ttl = 120

    We do use the APC opcode cache, but based on experimentation I don't think it's the issue: the PHP code is in a different file path for each deployment, and we ensure apc.stat=1. Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me. One solution would be to reload Apache every time we modify the document root symlink; I'll do that if we can't find another solution.
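
    If the lag turns out to be PHP's realpath cache rather than Apache itself, one workaround worth trying is to make the symlink swap atomic and let cached paths expire quickly. A minimal sketch, assuming a Capistrano-style /var/www layout (the paths here are hypothetical):

      # point a temporary link at the new release, then rename it over the
      # live link; rename() is atomic, so no request ever sees a missing root
      ln -sfn /var/www/releases/20120401120000 /var/www/current.tmp
      mv -T /var/www/current.tmp /var/www/current

      # and in php.ini, age cached symlink targets out quickly:
      ;realpath_cache_ttl = 2   ; a low value, so stale targets expire fast

    A low realpath_cache_ttl trades a little stat() overhead per request for near-immediate visibility of the new release.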

    Read the article

  • Does the Mountain Lion overheating issue have to do with launchd/Python?

    - by Christopher Jones
    Ever since I installed Mountain Lion, my MacBook Air has been running SUPER hot. I opened up Activity Monitor, and everything seemed pretty normal until I had it refresh every 0.5 seconds... and then I started seeing some interesting things. A 'Python' process appears and is terminated several times a second, and uses TONS of CPU (70-110%). Its parent process is 'launchd', and when I sample the process, there is a lot going on with Python. http://db.tt/ovuX3hZM These appear and disappear too quickly to get a good sample; this one only happened to be using 70-ish percent of CPU, but they consistently hit 100-110%. http://db.tt/ovuX3hZMg The sample of the parent process, launchd, shows lots of context switches and UNIX system calls... What is the deal here? ANY help could be of use not only to me, but possibly to many others experiencing decreased battery life and warmer laps these days because of this Mountain Lion weirdness. PLEASE HELP! PS - I'd put the screen grabs inline, but I don't have enough street cred yet.
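
    One way to pin down a short-lived child like this is to poll the process table faster than Activity Monitor does. A rough sketch (the 0.2 s interval is an arbitrary choice):

      # print pid, parent pid, CPU and the full command line of any python
      # process; the [Pp] trick keeps grep from matching itself
      while true; do ps axo pid,ppid,%cpu,command | grep '[Pp]ython'; sleep 0.2; done

      # then look the parent job up in launchd's table:
      sudo launchctl list

    Once you have the full command line, the script path usually identifies which launchd job (often a LaunchAgent or LaunchDaemon plist) keeps respawning it.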

    Read the article

  • How do I get a Mac to request a new IP address from another DHCP server running in parallel while Ne

    - by huyqt
    Hello, I have an interesting situation. I'm trying to use a Linux-based machine to allow Macs to Netboot (similar to PXE boot) by running a DHCP service in parallel with the "global" DHCP server. The local DHCP server hands out IPs in a private subnet, e.g., 10.168.0.10-10.168.254.254, while the "global" DHCP server hands out IPs from the range 10.0.0.1-10.0.1.254. The local DHCP range is only supposed to be used for Preboot Execution Environment and Netboot. The local DHCP server is something I have control over, but I do not have access to the global DHCP server. I have a filter to only allow members with the vendor strings "AAPLBSDPC/i386" and "PXEClient". PXE works fine, but Netboot has a quirk. Apple systems that haven't been connected to the network yet can Netboot fine, but once one grabs a "real" IP address from the global DHCP server, it "saves" it and requests it the next time we want it to netboot (which the local DHCP server won't give it). This is what I want:

      Mar 30 10:52:28 dev01 dhcpd: DHCPDISCOVER from 34:15:xx:xx:xx:xx via eth1
      Mar 30 10:52:29 dev01 dhcpd: DHCPOFFER on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
      Mar 30 10:52:31 dev01 dhcpd: DHCPREQUEST for 10.168.222.46 (10.168.0.1) from 34:15:xx:xx:xx:xx via eth1
      Mar 30 10:52:31 dev01 dhcpd: DHCPACK on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
      Mar 30 10:52:32 dev01 in.tftpd[5890]: tftp: client does not accept options
      Mar 30 10:52:53 dev01 in.tftpd[5891]: tftp: client does not accept options
      Mar 30 10:52:53 dev01 in.tftpd[5893]: tftp: client does not accept options
      Mar 30 10:52:54 dev01 in.tftpd[5895]: tftp: client does not accept options

    This is what I get when it already has a "stored" IP:

      Mar 30 10:51:29 dev01 dhcpd: DHCPDISCOVER from 00:25:xx:xx:xx:xx via eth1
      Mar 30 10:51:30 dev01 dhcpd: DHCPOFFER on 10.168.222.45 to 00:25:xx:xx:xx:xx via eth1
      Mar 30 10:51:31 dev01 dhcpd: DHCPREQUEST for 10.0.0.61 (10.0.0.1) from 00:25:xx:xx:xx:xx via eth1: ignored (not authoritative).

    Do you have any suggestions? It would be much appreciated.
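
    That trailing "ignored (not authoritative)" is the clue: a non-authoritative dhcpd silently ignores REQUESTs for leases it doesn't know, so the Mac never falls back to DISCOVER. A hedged sketch of the usual fix, declaring the local server authoritative so it NAKs the stale lease (the class definition is an assumption based on the vendor-string filter described above):

      # /etc/dhcp/dhcpd.conf (sketch)
      authoritative;    # lets dhcpd send DHCPNAK for leases it doesn't recognize

      class "netboot" {
          match if (substring(option vendor-class-identifier, 0, 9) = "PXEClient")
                or (substring(option vendor-class-identifier, 0, 9) = "AAPLBSDPC");
      }

      subnet 10.168.0.0 netmask 255.255.0.0 {
          pool {
              allow members of "netboot";
              range 10.168.0.10 10.168.254.254;
          }
      }

    The caveat: an authoritative server will NAK any request on that segment for an address it doesn't own, so it can race the global DHCP server for ordinary clients; test this somewhere it can't disrupt production leases.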

    Read the article

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers, and a handful of senior engineers, all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed, e.g. MySQL, firewalls, SAN storage, load balancers... Others are generalists who can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation, and configuration management methodologies. For instance, we have two engineers who do the bulk of the Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not or cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

    Read the article

  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc... But when I put it behind Nginx, git commands stop working, giving no errors except: "fatal: The remote end hung up unexpectedly". I know that it's Nginx causing the trouble, because if I open the port that Tornado is running on and try my git commands through that (i.e. "git pull \http://mysite.com:8000/myrepository master" instead of "git pull \http://mysite.com/myrepository master" [backslashes added because Server Fault says I have too many links]) everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress; I'm just not sure which. Any thoughts on how to fix this? Since everything works when bypassing Nginx, I'm considering just redirecting git requests to the Tornado port itself, but this feels like a hack rather than a clean solution...
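
    For what it's worth, buffering and compression are the usual suspects when git's Smart HTTP protocol dies behind a reverse proxy. A hedged sketch of the relevant proxy block (the backend port is taken from the question; everything else is an assumption):

      location / {
          proxy_pass           http://127.0.0.1:8000;
          proxy_set_header     Host $host;
          proxy_set_header     X-Real-IP $remote_addr;
          proxy_buffering      off;    # stream the pack data instead of spooling it
          gzip                 off;    # git negotiates its own compression
          client_max_body_size 0;      # pushes can be arbitrarily large
          proxy_read_timeout   300;    # big clones take a while
      }

    If the config being cloned enables gzip or response buffering globally, overriding those in the git location block is a sensible first experiment.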

    Read the article

  • 0 connected nodes in DataStax OpsCenter

    - by gansbrest
    I installed opscenterd on a separate node outside of the cluster, but within the firewall (AWS security group), and tested all possible ports between the agents and the OpsCenter server. There are no errors in the log:

      2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Initializing event storage.
      2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Attempting to load all persisted alert rules
      2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted alert rules
      2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done initializing event storage.
      2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted scheduled job descriptions
      2013-10-30 01:07:23+0000 [FC_Cluster] INFO: OpsCenter starting up.
      2013-10-30 01:07:23+0000 [] INFO: Finished starting new cluster services for FC_Cluster
      2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.34.10.185 is version u'3.2.2'
      2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.32.37.251 is version u'3.2.2'
      2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.82.226.252 is version u'3.2.2'

    The most interesting part is that I can see some data in the OpsCenter UI. When I stop the agents, no data is displayed; when I start them, it shows up again. Yet at the same time it reports 0 connected nodes. Storage capacity is even funnier: 3 of 0 nodes. Any ideas why that could be happening?
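
    Since the agents clearly register (the version lines above), one hedged guess is a mismatch between the interface the agents report and the one opscenterd expects. The agent-side settings live in address.yaml; the path below matches typical 3.x packages but may differ on your install:

      # on each Cassandra node:
      cat /var/lib/datastax-agent/conf/address.yaml
      # stomp_interface: 10.x.x.x   <- should be the OpsCenter server's IP
      # local_interface: 10.x.x.x   <- the node's own address (optional)

      # agents connect outbound to opscenterd on the STOMP port:
      netstat -an | grep 61620

    If stomp_interface points at a hostname or interface the agent can't actually reach (easy to get wrong across EC2 security groups), you get exactly this half-connected behavior.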

    Read the article

  • Windows AD DNS: Event ID 5504

    - by Chris_K
    Two of my AD controllers (both running the DNS service) appear to be having a similar issue: both are logging lots of DNS events that look like this:

      Event Type: Information
      Event Source: DNS
      Event Category: None
      Event ID: 5504
      Date: 5/24/2010
      Time: 11:51:38 AM
      User: N/A
      Computer: ALPHA
      Description: The DNS server encountered an invalid domain name in a packet from 76.74.137.6. The packet will be rejected. The event data contains the DNS packet.

    The same event arrives at the same time with a packet from 76.74.137.7 as well. I know this is "Information", not an error, but since it is new and different it bothers me (yes, I fear unexplained change!). Both machines are running Windows 2003 R2 SP2. The DNS servers are not exposed to the internet, and both are configured to use OpenDNS as forwarders. For both servers, this started about a week ago. Any thoughts on: 1) should I be concerned? 2) how can I stop/fix this? To keep it interesting, I have a 3rd AD/DNS box in the same domain but a different Active Directory site, with the same forwarders, and it doesn't have this issue.

    Read the article

  • How can I find out which processes on my computer are accessing the microphone?

    - by Vinayak
    After reading this interesting Lifehacker post and the comments on the page, one commenter wondered whether the Physical Device Object Name of other hardware, such as the microphone, could be used to find out the names of processes using that device. I tried the same approach, but so far it only seems to work for the webcam. Is there any other way I could get this to work in Process Explorer? UPDATE: The Lifehacker post was about finding out which Windows process is currently using your webcam. This is how they went about doing it:

      1. Start Device Manager (WIN+R → "devmgmt.msc" → OK)
      2. Find your webcam among the list of devices (check under Imaging Devices)
      3. Open the properties window of the device and switch to the Details tab (Right click → Properties → Details)
      4. In the dropdown menu, select Physical Device Object Name and copy the string (Right click → Copy)
      5. Download Process Explorer
      6. Make sure you have opened Process Explorer in Administrator Mode (File → Show Details for All Processes)
      7. Hit CTRL+F and enter the string you copied earlier (it should be something like \Device\000000XX)
      8. Hit the Search button and you should see a list of processes using the webcam (if there are any)
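
    The same handle search can be scripted with Sysinternals Handle, which is essentially the command-line twin of Process Explorer's Ctrl+F. A sketch (the device string is a placeholder; run from an elevated prompt):

      :: list every process holding a handle whose name contains the string
      handle.exe \Device\000000XX

    If the microphone's object name never shows up in any process, that would suggest audio capture goes through the shared audio engine (audiodg.exe) rather than a per-process device handle, which may be why the webcam trick doesn't carry over.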

    Read the article

  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    I'm in need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc., like the famous www.heise.de/download/h2testw, or something that is at least common within repositories. (h2testw writes a specific data string over and over onto the medium, then reads it again to verify it was written correctly, and displays write/read time/speed.) Please no

      dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k

    since it won't verify whether everything was written correctly; it only tests whether reads and writes to the device succeed. So far I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow, and I don't know exactly what it writes or whether it accounts for wear-leveling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled; designed after h2testw, the concept sounds interesting, I'd just rather have it as a ready-to-go bash script.
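
    In the absence of a packaged tool, an h2testw-style pass can be approximated in a few lines: checksum the data as it is generated, write it out, flush caches, then verify from the medium. A minimal sketch, assuming a mounted target and GNU coreutils (MOUNT and COUNT are placeholders):

      #!/bin/bash
      set -e
      MOUNT=/mnt/target       # where the device is mounted
      COUNT=10                # how many 100 MB files to write
      cd "$MOUNT"
      : > /tmp/fill.sha256
      for i in $(seq 1 "$COUNT"); do
          # tee lets us hash the stream as it is written, not a cached re-read
          sum=$(dd if=/dev/urandom bs=1M count=100 2>/dev/null \
                | tee "fill_$i.bin" | sha256sum | awk '{print $1}')
          echo "$sum  fill_$i.bin" >> /tmp/fill.sha256
      done
      sync
      echo 3 > /proc/sys/vm/drop_caches   # root only: force re-reads from the medium
      sha256sum -c /tmp/fill.sha256

    Like h2testw, this only catches silent corruption or fake capacity once the files cover most of the medium, so size COUNT to nearly fill the device.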

    Read the article

  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log and lately I've been noticing these accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example: 69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-" Oddly, all 26 attempts are to either /cache/df.php, or to /cvfull.pdf/cache/df.php - they come in pairs. A few weeks ago it was zx.php, now it's df.php - I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting? I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?
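
    A quick tally of who is sending these can be more telling than the individual lines. A sketch, assuming a stock Apache log location:

      # count probe attempts per source IP across both filename variants
      grep -E '/cache/(df|zx)\.php' /var/log/apache2/access_log \
          | awk '{print $1}' | sort | uniq -c | sort -rn

    If the hits cluster on a handful of addresses, they are likely compromised sites on shared hosting running a PHP scanner rather than anything targeted at you specifically.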

    Read the article

  • How small (spec wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around, give the virtual machine a very modest amount of RAM and disk, and then choose and configure the OS appropriately. The question is: how small a virtual machine have people managed to set up (and get to both boot and run)? Virtual machines doing something useful are preferable, even though I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. It's quite an open-ended and academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be saved to and restored from disk quite quickly. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga. OK, that's the wrong CPU architecture for modern virtualization software running on an x86-based PC, but it's still an interesting thought. You could have quite a few such small VMs running on a modern PC.)
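
    For anyone who wants to experiment, QEMU makes the lower bound easy to probe, since guest RAM is just a command-line knob. A sketch (kernel and initrd paths are placeholders; many Linux kernels refuse to boot much below roughly 16 MB, so walk -m down until it fails):

      qemu-system-x86_64 -m 16 -nographic \
          -kernel ./bzImage -initrd ./tiny.cpio \
          -append "console=ttyS0 quiet"

    A stripped kernel plus a BusyBox initramfs along these lines can fit in a few megabytes of disk, which makes the many-tiny-VMs idea above less far-fetched than it sounds.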

    Read the article

  • Windows 2008 RemoteApp client disconnects within a matter of minutes

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and remote applications specifically. The situation is as follows: the TS idle timeout is disabled via GPO, and TS terminates disconnected sessions after 1 hour (via GPO). My users can log on to the Terminal Server and get a full desktop, or use .rdp files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they remain logged on indefinitely, and when they disconnect, the session is terminated after an hour. However, when a user connects using a remote application link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects. Event IDs on the TS server:

      4779: This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching.
      4778: This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching.

    Users are connecting directly to 3389, not through a TS Gateway at the moment. This behavior is consistent across the different clients that we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it. Can someone explain to me how there can be a difference in behaviour between a full desktop and RemoteApp, since essentially they use the exact same client? Regards, Jeroen

    Read the article

  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" rather than a "software"/"programming" question, but I need to use this share in my programs, so it is close to programming. We had an XP-based wireless network: the server is XP Professional, the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7 x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and networks, and when I returned to this network, I found I could no longer connect to this server. I see none of its resources, and when I try to double-click the computer, I get a login window where I can type anything; I can never log in. The interesting part: the other XP Home machine can see the server and can log in as guest or another user; the server can see the XP Home notebook; the Win7 machine can see the notebook's shared folders, and the XP Home machine can see the Win7 shared folders; the server can see the Win7 folders. BUT: the Win7 machine cannot see the server's folders, and cannot see its resources at all. The Win7 machine is in "work networking", and its workgroup name is not MSHOME. I tried everything on the server: removing the MS client, restoring it with simple sharing, setting a guest password, etc., but I have lost the ability to access this server from Win7. Does anyone have any idea what I need to check or set to access these resources, so I can use them in my programs? Thanks for every info/link. dd
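
    One hedged guess worth checking before anything else: Windows 7 defaults to NTLMv2-only authentication, which an XP server configured the old way may silently reject, producing exactly this endless login prompt. The LAN Manager authentication level is controlled by a registry value; relaxing it on the Win7 client is a common fix, but confirm it fits your security requirements first:

      :: run on the Windows 7 machine from an elevated prompt
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 2 /f

    Level 2 sends classic NTLM (not NTLMv2) responses, which XP-era servers accept; log off or reboot before retrying the share.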

    Read the article

  • Filtered Router Interface

    - by jviotti
    I'm having some problems with a Scientific Atlanta DPR2320R2, specifically with the WiFi. A few months ago I changed its password and username, and now I can't remember them. So I tried cracking it with Hydra, but that made things worse: the web admin content rendered partially and threw lots of errors. I then reset the router, and found I was able to browse the web with an ethernet-connected PC. WiFi is configured by registering each device's MAC address, and since the router was reset, the registered MAC addresses were lost; no device could connect to the WiFi. In fact, devices do not even see the network. I tried pointing to 192.168.0.1 to re-register the MACs, but I couldn't connect to the router's access point. I tried listing the hosts:

      $ nmap -sP 192.168.0.0/24
      Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:18 ART
      Host 192.168.0.1 is up (0.0018s latency).
      Host 192.168.0.11 is up (0.00025s latency).
      Nmap done: 256 IP addresses (2 hosts up) scanned in 59.62 seconds

    Then I checked that 192.168.0.1 was really up by sending pings; it responded to all of them. I quick-scanned the access point:

      $ nmap 192.168.0.1
      Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:08 ART
      Interesting ports on 192.168.0.1:
      Not shown: 999 closed ports
      PORT   STATE    SERVICE
      80/tcp filtered http
      Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds

    Look at the state of port 80: FILTERED. I'm pretty confused now. Any suggestion would be appreciated. Thanks in advance.

    Read the article

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. In Chrome 21 and up, however, it downloads the file instead, and displays a warning at the top saying "Extensions, apps, and user scripts can only be added from the Chrome Web Store". The "Learn More" link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn't say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting: "Enterprise Administrators: You can specify URLs that are allowed to install extensions, apps, and themes directly through the ExtensionInstallSources policy." So, I ran the following commands, then restarted Chrome and Chrome Canary:

      defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*"
      defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*"

    Sadly, these settings only seem to affect extensions, apps, and themes (as the text says), not user scripts. (I've filed a bug asking for this setting to affect user scripts as well.) Any ideas on how to install a private user script (one I don't want to add to the Chrome Web Store) in Chrome 21+? Update: The problem was that gist.github.com's raw URLs redirect to a different domain. So, use these commands instead:

      # Allow installing user scripts via GitHub or Userscripts.org
      defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"
      defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"

    This works!

    Read the article

  • Cleaning a proxy/phishing trojan from Windows XP computer

    - by i-g
    I am trying to remove an interesting trojan from a Windows XP computer. It manifests itself as a phishing page (screenshot linked) that appears after the user tries to log on to eBay. So far, I haven't found any other web sites that are affected. As you can see, the trojan intercepts browser connections (all installed browsers are affected) and injects this phishing page. The address looks like it's ebay.com, but HTTPS verification doesn't work (no lock icon or green bar in Firefox.) At some point, Trojan.Dropper appeared on the computer. I removed it with Malwarebytes Anti-Malware. Although it reappeared several times, it seemed to be gone after I booted into Safe Mode and did a full system scan with MBAM. Now, however, a different trojan has appeared on the machine; I suspect it was installed by Trojan.Dropper. So far, MBAM, Ad-Aware, and Spybot S&D have been unable to remove it. I've looked for it in the HijackThis log but haven't found anything conclusive. Has anyone run across a trojan like this before? Where would I start looking for it to remove it manually? Thank you for reading.
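
    For manual hunting, it may help to first establish where the interception happens. A few low-risk checks (standard XP paths and tools; nothing here modifies the system):

      :: look for an ebay.com line pointing somewhere unexpected
      type %SystemRoot%\system32\drivers\etc\hosts
      :: see what ebay.com currently resolves to from the DNS cache
      ipconfig /displaydns | findstr /i ebay
      :: on XP, proxycfg shows the WinHTTP proxy a trojan may have set
      proxycfg

    If the hosts file and DNS look clean, the failed HTTPS verification suggests the injection happens inside the browser stack itself, so listing Winsock LSPs (netsh winsock show catalog) would be the next place to look.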

    Read the article

  • What is the collaborative screen shot/diagramming application recently featured on Hacker News and p

    - by wonsungi
    A few days ago, I saw this video for a screen-capture application. I'm pretty sure I followed a link from Hacker News, possibly to a Lifehacker article. The video was very short, but demonstrated how the application could be used. The application was basically a movable/resizable viewport with a button; when the button is pressed, the contents of the viewport are saved to an image (basically a screen capture). The interesting thing is what you could do after that point. One of the specific examples from the video browsed to Google Maps Street View, grabbed a photo of an intersection, then scribbled notes in colored "marker" about where to meet and where the restaurant was. Another example grabbed a house layout from a CAD tool, then scribbled notes on it. The last part of the video showed several possible uses being scrolled through the application's viewport. It also seemed very easy to share these images with other people, because there was some type of integration, either with the application's own site or with common social websites/chat services. The application was shown running on both Windows and Mac. edit: I think there was an iPhone app as well. Does anyone know what this application is? I tried searching Google, Hacker News, and Lifehacker already. It is not Jing.

    Read the article

  • What keeps you from changing your public IP address and wreaking havoc?

    - by Whitemage
    An interesting question was asked of me, and I did not know what to answer, so I'll ask here. Let's say I subscribe to an ISP and I'm using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let's say, 60.61.62.75 and messing with another consumer's internet access? For the sake of this argument, let's say this other IP address is also owned by the same ISP, and let's assume it's possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address, so that's 3 addresses the ISP "loses" to you. That seems very wasteful for dynamically assigned IP addresses, where the majority of customers are. Could they simply be using static ARPs? ACLs? Other simple mechanisms? Would anyone who has worked at an ISP be willing to explain this a bit?
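
    On cable plant specifically, the CMTS usually enforces this: Cisco uBR-class gear has cable source-verify, which drops traffic from any source IP that doesn't match a DHCP lease the CMTS saw handed out. The enterprise-switch analogue, sketched below for illustration (Cisco IOS syntax; the VLAN and interface are hypothetical), is DHCP snooping plus IP Source Guard:

      ip dhcp snooping
      ip dhcp snooping vlan 10
      !
      interface FastEthernet0/1
       switchport mode access
       switchport access vlan 10
       ! IP Source Guard: drop frames whose source IP doesn't match
       ! the DHCP snooping binding learned on this port
       ip verify source

    Either way, the spoofed 60.61.62.75 packets die at the first hop, because no binding ties that address to your modem or port.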

    Read the article

  • GlassFish v3: Security related updates + Repository/Publisher?

    - by chris_l
    I've used GlassFish v3.0 as my main development application server for a few weeks now. Now that I want to install it on my VPS, I'd like to get the latest security updates, because GlassFish v3 Release 3.0 (Open Source Edition or not) is already a few months old, and v3.1 is only available as "early access" nightlies (see https://glassfish.dev.java.net/public/downloadsindex.html). GlassFish offers an update mechanism (via pkg or updateTool), but when I simply try to get the latest updates (pkg image-update), it finds nothing. However, when I change the preferred publisher to dev.glassfish.org, I get a list with lots of updates. The interesting thing is that I haven't been able to find any description of the contents of the various publishers/repositories (release, stable, contrib and dev) anywhere on the web, most importantly answering the question: am I supposed to use the dev repository for security updates, or does it contain unstable updates? (The name suggests unstable updates, but version numbers like "3.0.1,0-11:20100331T082227Z" leave me guessing. The build is more than a week old, so it's obviously not "nightly" or "weekly", but what is it?) Where do I get security updates from, then? Or are there simply no security updates yet? Asking on the GlassFish forum resulted in 56 views but 0 answers.
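
    For reference, the publisher juggling itself is straightforward with the bundled IPS client; a sketch (run from the GlassFish install directory; whether dev is safe for production is exactly the open question above):

      ./bin/pkg publisher                            # list configured publishers and their URLs
      ./bin/pkg set-publisher -P dev.glassfish.org   # make it the preferred publisher
      ./bin/pkg image-update                         # pull whatever that repository offers

    Pinning down what each repository actually promises (and reverting -P to the release publisher afterwards) remains safer than blindly tracking dev on a production VPS.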

    Read the article

  • Why are FMS logs filled with 'play' event status code 408 for a failed webcast?

    - by Stu Thompson
    Recently we had a live webcast event go horribly wrong, and I'm doing the technical post-mortem with limited information. We know that the hardware encoder (a Digital Rapids TouchStream Web HDI) was unable to sustain a reliably high upstream rate. What we don't know is whether the encoder's connection was the problem (Zürich) or that of the streaming server (in Frankfurt). Unfortunately, I've got three different vendors all blaming each other (the CDN who runs the server, the on-site ISP, and the on-site encoding team). In the FMS log files I see a couple of interesting things: zillions of status code 408 entries on play events for clients (Adobe's documentation states that this means "Stream stopped because client disconnected"; "zillions" here is a ratio of about 10 events per individual IP address), and several unpublish/(re)publish events per hour for the encoder. I'd like to know if all those 408s can tell me with authority that the FMS server was starved for bandwidth, or that the encoding signal was starved (and hence the server was disconnecting clients). Any clues?
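
    If the raw access logs are still around, the events-per-IP ratio is easy to recompute rather than eyeball. A hedged one-liner: FMS access logs are tab-separated with a #Fields header line, so the column numbers below are assumptions to adjust against that header:

      # count 408 play events per client IP, busiest first
      awk -F'\t' '$1 == "play" && $9 == 408 { n[$5]++ }
                  END { for (ip in n) print n[ip], ip }' access.00.log | sort -rn | head

    Plotting those 408s over time against the encoder's unpublish/republish timestamps would show whether clients were dropped in bursts right after the encoder lost its feed, which would point at the Zürich uplink rather than the Frankfurt server.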

    Read the article

  • Share Point ACL on OSX Lion Server - Posix group always takes over ACLs

    - by Ben
    I'm trying to configure a share point on a Lion Server machine. The directory is created by the local server admin (serveradmin) and has rwxr-x--- on it, and serveradmin belongs to the local staff group:

      serveradmin    read/write
      staff (group)  read
      Others         none

    We have an OD group for all the employees (Workers). Using the Server tool, we've given Full Control on the share point:

      Workers        Full Control (ACL)
      serveradmin    read/write
      staff (group)  read
      Others         none

    We would assume that Workers could then do what they want on the share, but that doesn't seem to be the case: the POSIX permissions appear to take over the ACL permissions for Workers. If I change the staff permission to read/write, then Workers can create a file or folder in the share point. I would think the ACL should win, but it doesn't; POSIX always wins, rendering the ACL useless. Furthermore, if I leave the read/write permission for staff and take Write permission away from the Workers group, the POSIX group still wins. Essentially the Workers ACL does absolutely nothing. There are reports of similar problems in this Apple forum thread: https://discussions.apple.com/thread/3722901 - the directory-nesting fix suggested there doesn't work for us. Has anyone had similar issues, and does anyone know how to fix this? Edit: in Workgroup Manager, the employee users have their primary group set to staff and are given the additional OD group Workers. Changing their primary group doesn't help; it only shifts the problem onto Others taking over rights (logically). Edit 2: OK, this is interesting: adding OD users (rather than the group) to the share's ACL works totally fine.
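
    Since individual OD users in the ACL behave correctly, comparing what the filesystem actually stores for the group ACE may narrow it down. A sketch using the built-in ACL tools (the share path is hypothetical):

      # show the ACL entries and their order; inherited entries are marked
      ls -le /Shared/Projects

      # re-add the group ACE by hand with explicit directory rights and inheritance
      sudo chmod +a "Workers allow list,add_file,search,add_subdirectory,delete_child,readattr,file_inherit,directory_inherit" /Shared/Projects

    If ls -le shows the Workers ACE present but ineffective, the usual suspect is group resolution (does id <someuser> actually list the OD Workers group?) rather than ACL evaluation order.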

    Read the article

  • Using smartctl to get vendor-specific attributes from an SSD behind a SmartArray P410 controller

    - by Lairsdragon
    Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers work well so far. Now I'd like to get wear-level info, error statistics, etc. from the drives. While the SA P410 supports a passthrough of the SMART command to a single drive in the array, I was not able to get the interesting values from the drive. In this case it is especially the wear level indicator (attribute ID 233) that interests me, but this is only present if the drive is directly attached to a SATA controller. smartctl on a directly connected SSD:

      # smartctl -A /dev/sda
      smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
      Home page is http://smartmontools.sourceforge.net/

      === START OF READ SMART DATA SECTION ===
      SMART Attributes Data Structure revision number: 5
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        3 Spin_Up_Time            0x0000 100   000   000    Old_age  Offline In_the_past 0
        4 Start_Stop_Count        0x0000 100   000   000    Old_age  Offline In_the_past 0
        5 Reallocated_Sector_Ct   0x0002 100   100   000    Old_age  Always  -           0
        9 Power_On_Hours          0x0002 100   100   000    Old_age  Always  -           8561
       12 Power_Cycle_Count       0x0002 100   100   000    Old_age  Always  -           55
      192 Power-Off_Retract_Count 0x0002 100   100   000    Old_age  Always  -           29
      232 Unknown_Attribute       0x0003 100   100   010    Pre-fail Always  -           0
      233 Unknown_Attribute       0x0002 088   088   000    Old_age  Always  -           0
      225 Load_Cycle_Count        0x0000 198   198   000    Old_age  Offline -           508509
      226 Load-in_Time            0x0002 255   000   000    Old_age  Always  In_the_past 0
      227 Torq-amp_Count          0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
      228 Power-off_Retract_Count 0x0002 000   000   000    Old_age  Always  FAILING_NOW 0

    smartctl on a P410-connected SSD:

      # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
      smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
      Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    (Right, it is completely empty.) smartctl on a P410-connected HDD:

      # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
      smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
      Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

      Current Drive Temperature:     27 C
      Drive Trip Temperature:        68 C
      Vendor (Seagate) cache information
        Blocks sent to initiator = 1871654030
        Blocks received from initiator = 1360012929
        Blocks read from cache and sent to initiator = 2178203797
        Number of read and write commands whose size <= segment size = 46052239
        Number of read and write commands whose size > segment size = 0
      Vendor (Seagate/Hitachi) factory information
        number of hours powered up = 3363.25
        number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410's SMART command passthrough?

    Read the article

  • Cisco ASA 5505: Force NAT before IPsec?

    - by WuckaChucka
    I'm trying to route public-to-public IPs over an IPsec tunnel. However, the src IP is not "interesting" to the Cisco's IPsec engine, because it doesn't appear to be getting translated to the outside IP before being evaluated. From WEST to EAST, my public-to-public IPsec works fine: I can make a request from 192.168.0.5:any to 200.200.200.200:80, because the Vyatta does the NAT translation before the IPsec tunnel inspects the traffic, so the remote-subnet and local-subnet match (see below). However, from EAST to WEST, I see a deny in my Cisco logging buffer for

      Deny tcp src inside:192.168.1.5/59195 dst outside:100.100.100.100/80

    which leads me to believe that the IPsec engine is not matching the encrypt ACL because the address has not been translated yet. Any ideas?

    WEST (Vyatta):
      inside: 192.168.0.0/24
      inside host: 192.168.0.5/24
      outside: 100.100.100.100
      IPsec local-subnet: 100.100.100.100/32
      IPsec remote-subnet: 200.200.200.200/32

    EAST (Cisco):
      inside: 192.168.1.0/24
      inside host: 192.168.1.5/24 (DNAT'ed on port 80 to outside)
      outside: 200.200.200.200
      IPsec local-subnet: 200.200.200.200/32
      IPsec remote-subnet: 100.100.100.100/32

    Read the article

  • "Path Not Found" when attempting to write to a sub folder within a mapped drive

    - by Adam
    We have an interesting issue with one of our server shares, or possibly with our Win 7 desktops. When our users try to save files in a subfolder of a mapped drive on our DC, either via copy/paste or through an application, they receive an error saying "Path not found". They can, however, browse that folder and open files from it, which is where the "Path not found" error doesn't seem to stack up, in my opinion. Users can save files fine in the root of the mapped drive; the problem appears only to affect subfolders. Which users and machines are affected seems random; an affected user can log on to a different machine and save to subfolders fine on the same mapped drive. Event Viewer hasn't been much help either. Currently the only solution we have found is to re-image the affected machines, which resolves the issue. Our servers are Server 2008 R2 and the desktops Win 7 Pro. Any help/pointers/suggestions would be greatly appreciated.

    Read the article
