Search Results

Search found 11493 results on 460 pages for 'customer success'.


  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server, and as far as we can tell our subscription is set up properly. However, when I run yum repolist, it doesn't show any repositories. /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the contents of another RHEL6 server's redhat.repo, but as soon as I run yum, it wipes the file out again. I just need to get the basic Red Hat repositories set up so I can install packages.

    EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products.

    EDIT: When I run yum update, here's what I get:

    [code]
    # yum update
    Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
    This system is receiving updates from Red Hat Subscription Management.
    Setting up Update Process
    No Packages marked for Update
    [/code]

    When I log in to the Red Hat customer portal, it shows the subscription as active.

    EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.

    [code]
    # subscription-manager register --force
    # subscription-manager subscribe --pool=*redacted*
    [/code]

    EDIT: Contents of /etc/yum.conf:

    [code]
    [main]
    cachedir=/var/cache/yum/$basearch/$releasever
    keepcache=0
    debuglevel=2
    logfile=/var/log/yum.log
    exactarch=1
    obsoletes=1
    gpgcheck=1
    plugins=1
    installonly_limit=3
    [/code]

    Contents of /etc/yum/pluginconf.d/rhnplugin.conf:

    [code]
    [main]
    enabled = 0
    gpgcheck = 1
    [/code]
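    A repair sequence worth trying from the command line (a sketch, assuming stock subscription-manager on RHEL 6; output will differ per system):

    [code]
    # The empty 'Products' tab suggests the product certificate is missing;
    # yum's product-id plugin and redhat.repo generation both depend on it
    ls /etc/pki/product/
    # Re-fetch entitlement data and let the plugin regenerate redhat.repo
    subscription-manager refresh
    yum clean all
    yum repolist
    [/code]

    If /etc/pki/product/ is empty, the base product certificate is missing, which matches the empty 'Products' tab; reinstalling the redhat-release package or re-provisioning the system certificates is the usual next step.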


  • How could this happen to my FTP server?

    - by srisar
    Hi, it's me again. I just don't get why I'm getting this message. When I try to connect to my FTP server through my WAN address (112.135.26.115), it says: 530 user access denied. But when I give the same data to http://ftptest.net, the result is as follows:

    [code]
    Status: Resolving address of 112.135.26.115
    Status: Connecting to 112.135.26.115
    Status: Connected, waiting for welcome message
    Reply:  220-FileZilla Server version 0.9.34 beta
    Reply:  220-written by Tim Kosse ([email protected])
    Reply:  220 Please visit http://sourceforge.net/projects/filezilla/
    Status: CLNT http://ftptest.net on behalf of 112.135.26.115
    Reply:  200 Don't care
    Status: USER saravana
    Reply:  331 Password required for saravana
    Status: PASS *********
    Reply:  230 Logged on
    Status: SYST
    Reply:  215 UNIX emulated by FileZilla
    Status: FEAT
    Reply:  211-Features:
    Reply:  MDTM
    Reply:  REST STREAM
    Reply:  SIZE
    Reply:  MLST type*;size*;modify*;
    Reply:  MLSD
    Reply:  AUTH SSL
    Reply:  AUTH TLS
    Reply:  UTF8
    Reply:  CLNT
    Reply:  MFMT
    Reply:  211 End
    Status: PWD
    Reply:  257 "/" is current directory.
    Status: Current path is /
    Status: TYPE I
    Reply:  200 Type set to I
    Status: PASV
    Reply:  227 Entering Passive Mode (112,135,26,115,43,9)
    Status: MLSD
    Reply:  150 Connection accepted
    Listing: type=dir;modify=20100322113235; it_is_working!_Andrejs_Cainikovs_from_serverfault.com
    Listing: type=file;modify=20100322110559;size=5; New Text Document.txt
    Reply:  226 Transfer OK
    Status: Success
    [/code]

    Can anyone say why this happens to me? Please, I've been trying the whole day!
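    A quick way to compare the two paths from a command line (a sketch, assuming curl is available; substitute the real password):

    [code]
    # Run this once from inside the LAN and once from an outside host,
    # then compare the server replies
    curl -v --list-only ftp://saravana:PASSWORD@112.135.26.115/
    [/code]

    If the external test site logs in but your own client gets 530, the difference is usually the source IP, so checking any per-user IP filter in the FileZilla Server user settings and testing from both networks narrows it down.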


  • Startup script for Red5 on Ubuntu 9.04

    - by user49249
    I am creating a startup script for Red5 on Ubuntu. Red5 is installed in /opt/red5. The following is a working script on a CentOS box on which Red5 is running:

    [code]
    #!/bin/sh
    PROG=red5
    RED5_HOME=/opt/red5/dist
    DAEMON=$RED5_HOME/$PROG.sh
    PIDFILE=/var/run/$PROG.pid

    # Source function library
    . /etc/rc.d/init.d/functions
    [ -r /etc/sysconfig/red5 ] && . /etc/sysconfig/red5

    RETVAL=0

    case "$1" in
      start)
        echo -n $"Starting $PROG: "
        cd $RED5_HOME
        $DAEMON >/dev/null 2>/dev/null &
        RETVAL=$?
        if [ $RETVAL -eq 0 ]; then
          echo $! > $PIDFILE
          touch /var/lock/subsys/$PROG
        fi
        [ $RETVAL -eq 0 ] && success $"$PROG startup" || failure $"$PROG startup"
        echo
        ;;
      stop)
        echo -n $"Shutting down $PROG: "
        killproc -p $PIDFILE
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$PROG
        ;;
      restart)
        $0 stop
        $0 start
        ;;
      status)
        status $PROG -p $PIDFILE
        RETVAL=$?
        ;;
      *)
        echo $"Usage: $0 {start|stop|restart|status}"
        RETVAL=1
    esac
    exit $RETVAL
    [/code]

    What do I need to change in the above script for Ubuntu? My Red5 is in /opt/red5/, and to start it manually I always run /opt/red5/dist/red5.sh. I did not find rc.d/functions on Ubuntu on my laptop, and /etc/init.d/functions does not exist either. I would like to be able to use the script with service, as Red Hat distributions do. I checked /lib/lsb/init-functions.
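    The Red Hat helpers used above (success, failure, killproc, status) come from /etc/rc.d/init.d/functions, which Ubuntu doesn't ship; the LSB equivalents live in /lib/lsb/init-functions. A minimal sketch of the same script in Ubuntu style (assumes start-stop-daemon, which ships with Debian/Ubuntu):

    [code]
    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          red5
    # Required-Start:    $remote_fs $network
    # Required-Stop:     $remote_fs $network
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Red5 media server
    ### END INIT INFO
    PROG=red5
    RED5_HOME=/opt/red5/dist
    DAEMON=$RED5_HOME/$PROG.sh
    PIDFILE=/var/run/$PROG.pid
    . /lib/lsb/init-functions

    case "$1" in
      start)
        log_daemon_msg "Starting $PROG"
        start-stop-daemon --start --background --make-pidfile --pidfile $PIDFILE \
          --chdir $RED5_HOME --exec $DAEMON
        log_end_msg $?
        ;;
      stop)
        log_daemon_msg "Stopping $PROG"
        start-stop-daemon --stop --pidfile $PIDFILE --retry 10
        log_end_msg $?
        ;;
      restart)
        $0 stop
        $0 start
        ;;
      status)
        status_of_proc -p $PIDFILE "$DAEMON" "$PROG" && exit 0 || exit $?
        ;;
      *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
    esac
    [/code]

    Saved as /etc/init.d/red5, made executable, and registered with update-rc.d red5 defaults, it can then be run as service red5 start.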


  • Windows CE restores default configuration after restart (Motorola MC3190)

    - by jarek
    Good morning, everyone! I have a problem with a Motorola MC3190 handheld scanner running Windows CE. I've got a few of these to write a new program for a warehouse. A program that the customers used before was already installed, so I deleted it and installed the new software I'd just written. It runs very well, but when I take out the battery and leave the device without power for an entire night, it restores the whole configuration: the old program is back, the wireless configuration is back and... yeah. The scanner is restored to the configuration it was running when I received it a few weeks ago. What I want is to set up the whole configuration of the scanner so that, after a long power-off, my program and my configuration are the ones restored. I truly believe someone knows how to do this. Time is running out, and I believe the customer will be kind of annoyed when he changes the battery and the program he bought is gone. ;-) Regards, Jarek


  • Confused about the Windows 7 Preinstallation Kit

    - by David Brown
    I build custom PCs and would like to use the Windows 7 Preinstallation Kit to make installation go a little quicker and customize the Windows image. However, since each PC is built to a particular customer's specifications, the hardware will rarely be the same, so I would like to have a single answer file that works for everything. I'm not sure that's possible, however. What I mostly want to do for now is add my support information, as well as pre-set anything that I would normally change after each installation completes.

    I have a Windows 7 Professional Upgrade DVD set (both 32-bit and 64-bit), but no OEM disks. I copied the Install.wim file to my local drive and opened it in Windows System Image Manager, but it asks me to choose a catalog file specifically for each edition of Windows 7. Will this limit the answer file to whichever edition I choose? I would think choosing Starter would give me the most basic settings, which would apply to all other editions, but I'm not entirely sure of this. I don't intend to install any extra applications or drivers. I merely want to insert an OEM disk and my OPK USB drive and have it work for whatever edition of Windows 7 I'm installing. If a large number of similarly-configured PCs need to be built, I'll go ahead and create a custom answer file in that case, but for a single-machine order that seems like overkill.

    In addition, do I need a separate answer file for the 32-bit and 64-bit versions of Windows 7? Or will one file work for both, even though I copied the Install.wim file from the 32-bit disk? Thanks!
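    For the support-information part, a settings-only fragment is usually all that's needed; a minimal sketch of the relevant unattend.xml component (shown for x86, a 64-bit install uses processorArchitecture="amd64"; all the values here are placeholders):

    [code]
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="x86"
               publicKeyToken="31bf3856ad364e35"
               language="neutral" versionScope="nonSxS">
      <OEMInformation>
        <Manufacturer>Your Company Name</Manufacturer>
        <SupportHours>9am-5pm Mon-Fri</SupportHours>
        <SupportPhone>555-0100</SupportPhone>
        <SupportURL>http://www.example.com/support</SupportURL>
      </OEMInformation>
    </component>
    [/code]

    The catalog only drives WSIM's validation UI; the answer file it generates is plain XML that Setup applies at install time, which is why a settings-only file like this tends to apply across editions.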


  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi all, I have a project in mind and I'd love to hear some ideas on open-source solutions with COTS hardware. I have a few 24- and/or 48-port managed layer-2 switches with customers potentially on each port (though it's usually about 20-30). Right now each switch is a bridged network, and we backhaul the traffic to our core to a centralized DHCP server. I need to move them to a NAT solution, and while doing this I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port-forward from the public side of the firewall/NAT box to specific hardware on the inside of the NAT machine (easy enough, I know).

    My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range being handed out via a DHCP server on the appliance. A caching DNS server on the appliance would be a plus, since we backhaul everything to the core. I'd like to run FreeBSD, but I'm open. Now, to limit the visible broadcast traffic, I was thinking of putting each switch port in a different VLAN and having the switch trunk them to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should work. We have the parts to build these systems.

    So, does this make sense? Are there any other solutions out there that we don't have to spend money on, built from parts we already have? Are there any good distros that could do this already (m0n0wall?)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds. Thoughts?
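    A rough sketch of the FreeBSD side (interface names, VLAN IDs, and address ranges are placeholders; assumes em0 faces the core and em1 is the trunk to the switch):

    [code]
    # /etc/rc.conf -- one VLAN per customer port, each in its own subnet
    gateway_enable="YES"
    pf_enable="YES"
    vlans_em1="101 102 103"
    ifconfig_em1_101="inet 172.16.101.1/24"
    ifconfig_em1_102="inet 172.16.102.1/24"
    ifconfig_em1_103="inet 172.16.103.1/24"

    # /etc/pf.conf -- NAT out the public side, isolate customers, forward one port in
    ext_if = "em0"
    cust_nets = "172.16.0.0/16"
    nat on $ext_if from $cust_nets to any -> ($ext_if)
    rdr on $ext_if proto tcp from any to ($ext_if) port 8080 -> 172.16.101.10 port 80
    block in quick on { em1.101 em1.102 em1.103 } from $cust_nets to $cust_nets
    pass all
    [/code]

    With one subnet per VLAN, dnsmasq could cover both the per-VLAN DHCP ranges and the caching DNS; and if hand-rolled FreeBSD ends up too bare, pfSense (a FreeBSD-based m0n0wall offshoot) packages the same pieces behind a web UI.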


  • Windows 7 Not Recognizing Camera Nor iPhone as Camera

    - by taudep
    I've been struggling with this one for a few days. I recently upgraded an older computer to Windows 7 Home Premium. Neither my digital camera (a Canon SD1200IS) nor my iPhone is ever detected as a camera, and neither ever shows up as accessible in Explorer. With the Canon camera, no driver is required; it's supposed to work with the default Windows 7 drivers. However, in the Control Panel's Device Manager, I always see a yellow icon next to the "Canon Digital Camera" device. I've uninstalled the device and let Windows attempt to reinstall it, but it can never find a driver. With the iPhone it's very similar. One big difference, though, is that iTunes can see the iPhone and back it up, etc. Yet again in Device Manager there's a yellow icon next to the iPhone. I've uninstalled iTunes, reinstalled, rebooted, deleted drivers, and let Windows try to reinstall the driver, but it can never find one. So there seems to be a common thread: my machine can't detect cameras properly, and it might even be a lower-level driver I'm struggling with. I know that USB itself works, however, because I have an external drive hooked into the machine. I've gone through the web and tried two hours' worth of fixes without success. I feel that if I can get the Canon camera detected, the iPhone will be on its way to being fixed too. BTW, I couldn't really find anything of use in the Event Viewer. Any and all suggestions welcome.
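    One low-level check worth trying before more driver surgery (a sketch; devmgr_show_nonpresent_devices is a documented Device Manager environment flag):

    [code]
    rem Run in an elevated Command Prompt, then use View > Show hidden devices
    set devmgr_show_nonpresent_devices=1
    start devmgmt.msc
    [/code]

    Removing every hidden, non-present entry under "Imaging devices" and "Universal Serial Bus controllers" and then replugging the camera forces Windows to redo Plug and Play enumeration from scratch, which sometimes clears a stuck yellow-bang device.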


  • PHP extension causes symbol lookup error

    - by Christian
    Dear all, I installed - or rather tried to install - the NMCryptGate extension for PHP on my Debian 5.0.8 server. I did this by compiling the sources, which produced no error messages. Calling phpinfo(), I can see the extension as enabled. BUT whenever I try calling a method from this extension, an error is logged to the Apache error log:

    [code]
    /usr/sbin/apache2: symbol lookup error: /usr/lib/php5/20060613+lfs/nmcryptgate.so: undefined symbol: nmlistalloc
    [/code]

    What is missing? I got two packages from the software company: the PHP module sources, and some files which - according to their paths inside the tar - should go to /usr/local/bin|doc|include|lib. I moved them there, without any effect. Each of these two packages has its own libtool config file, both looking almost the same:

    [code]
    # libnmcryptgate.la - a libtool library file
    # Generated by ltmain.sh - GNU libtool 1.3.4 (1.385.2.196 1999/12/07 21:47:57)
    #
    # Please DO NOT delete this file
    # It is necessary for linking the library

    # The name that we can dlopen(3)
    dlname=''

    # Names of this library
    library_names='libnmcryptgate.so.1 libnmcryptgate.so libnmcryptgate.so'

    # The name of the static archive
    old_library=''

    # Libraries that this one depends upon
    dependency_libs=' -L. -L/usr/ssl/lib -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto'

    # Version information for libnmcryptgate
    current=1
    age=0
    revision=29

    # Is this an already installed library
    installed=yes

    # Directory that this library needs to be installed in
    libdir='/usr/local/lib'
    [/code]

    I tried several ways to get it right: moving files, symlinking, changing configurations - always followed by restarting Apache - with no success. I guess I just have to move the files to the correct location or change the libdir inside the config files, but meanwhile I'm totally confused by the two packages: do I need both? Which config rules what? Do I have to use the libdir variable, and for what? ... Can anybody point me to my source of failure? Thank you in advance. Regards, Christian
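    Two checks that usually pinpoint this kind of undefined-symbol error (a sketch; the paths follow the ones in the question):

    [code]
    # Which shared library actually defines the missing symbol?
    nm -D /usr/local/lib/libnmcryptgate.so.1 | grep nmlistalloc

    # Does the PHP extension record a dependency on that library?
    ldd /usr/lib/php5/20060613+lfs/nmcryptgate.so | grep -i nmcryptgate

    # If it is listed but "not found", register /usr/local/lib with the loader
    echo /usr/local/lib >> /etc/ld.so.conf && ldconfig
    [/code]

    If nm shows the symbol in libnmcryptgate but ldd on the extension doesn't mention the library at all, the module was built without -lnmcryptgate in its link line and needs to be rebuilt against the vendor library; moving files around won't fix that.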


  • How does one change the UUID of a Volume on Mac OSX 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a volume? The background for this question is that I have a duplicate-UUID issue: /Volumes/OldMacHD has a UUID of XYZ, and /Volumes/Mirror1 has a UUID of XYZ as well (the same UUID! I bet that's because OldMacHD used to be part of this mirror). I got these UUIDs via 'diskutil info /dev/thatdisknumber | grep UUID'. I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that the -s flag changes the UUID. However, if you run hfs.util all by itself, it shows you every option except -s! Grr. I tried it anyway: sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4 (the RAID volume). Nothing happens: no error message, no success message, and the UUID stays exactly the same. I also tried it while the volume was unmounted. Any ideas?


  • Installing Linux on an Asus p8z68-m PRO Motherboard

    - by Holland
    Here is a challenge: how is this done? I've tried disabling the ASM1061 controller in the Onboard Devices section, using Wubi, booting from USB (as I don't have a DVD drive yet), and even booting with RAID/IDE instead of the default AHCI. Still no dice. Google shows virtually nothing about Linux and this mobo, apart from people just saying "disable ASMedia" (which I assume means the ASM1061 controller, as that's all I see, apart from the USB 3.0 controller, which I disabled already), and it hasn't really helped much. So, what is wrong here?

    Edit: My problem is that I cannot boot Linux via USB or via a simple Windows-based installer such as Wubi (for Ubuntu). I wind up getting error messages along the lines of "write cache failed", along with many other cryptic messages similar to the following:

    [code]
    [ 1400.351374] sd 4:0:0:0: [sdb] Test WP failed, assume Write Enabled
    [ 1400.353433] sd 4:0:0:0: [sdb] Asking for cache data failed
    [ 1400.356601] sd 4:0:0:0: [sdb] Assuming drive cache: write through
    [/code]

    This seems to be common for Asus P8Z68-M Pro motherboards, with the only notable suggested solution being to "disable ASMedia", which, as I said before, I'm guessing means the ASM1061 controller on the motherboard. Despite already disabling it, I have tried this with both Fedora and Ubuntu without any success. I need to know what I can do about this; has anyone run into something similar or heard about this issue before? I know these motherboards are relatively new...


  • Is Windows Server 2008R2 NAP solution for NAC (endpoint security) valuable enough to be worth the hassles?

    - by Warren P
    I'm learning about Windows Server 2008 R2's NAP features. I understand what network access control (NAC) is and what role NAP plays in it, but I would like to know what limitations and problems it has that people wish they had known before rolling it out. Secondly, I'd like to know if anyone has had success rolling it out in a mid-size environment (a multi-city corporate network with around 15 servers and 200 desktops) where most (99%) of the clients are Windows XP SP3 and newer (Vista and Win7). Did it work with your anti-virus? (I'm guessing NAP works well with the big-name anti-virus products, but we're using Trend Micro.) Let's assume that the servers are all Windows Server 2008 R2. Our VPNs are Cisco gear and have their own NAC features. Has NAP actually benefited your organization, and was it wise to roll it out? Or is it yet another of the many things Windows Server 2008 R2 can do, but that you probably won't want to use even after moving your servers to it? In what particular ways might the built-in NAP solution be the best one, and in what particular ways might no solution at all (the status quo pre-NAP) or a third-party endpoint-security or NAC solution be a better fit? I found an article in which a panel of security experts in 2007 said NAC is maybe "not worth it". Are things better now, in 2010, with Windows Server 2008 R2?


  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server; that has worked for a long time with both regular and SSL connections. Yesterday I installed CF9 side-by-side with the existing MX7 install to start testing. The install was smooth: it detected MX7, adjusted CF9's port numbers to avoid conflicts, etc. Testing started well: MX7 still worked over both regular HTTP and SSL, and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, Firefox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this:

    Secure Connection Failed. An error occurred during a connection to localhost:9101. Peer reports it experienced an internal error. (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much to get past it, and I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

    [code]
    <service class="jrun.servlet.http.SSLService" name="SSLService">
      <attribute name="enabled">true</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="port">9101</attribute>
      <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
      <attribute name="keyStorePassword">*deleted*</attribute>
      <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
      <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
      <attribute name="deactivated">false</attribute>
      <attribute name="bindAddress">*</attribute>
      <attribute name="clientAuth">false</attribute>
    </service>
    [/code]

    Does anyone here know of any issues setting up SSL with CF9? Has anyone had success with it? Dave
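    A keystore whose entry lacks a usable private key, or one generated with keytool's old DSA default, can produce exactly this internal-error alert rather than a clearer message. A sketch of generating a clean self-signed RSA key pair for testing (paths match the jrun.xml above; the -dname values are placeholders):

    [code]
    keytool -genkey -alias mykey -keyalg RSA -keysize 2048 \
      -dname "CN=localhost, OU=Dev, O=Example" \
      -validity 365 -keystore {jrun.rootdir}/lib/mykey -storepass changeit
    [/code]

    Worth confirming afterwards with keytool -list -v -keystore ... that the entry type is keyEntry (a private key), not trustedCertEntry; a trust-only entry can't complete the server side of the handshake.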


  • Authenticating Active Directory Users to Mac OS X Mavericks Server L2TP VPN Service

    - by dean
    We have a Windows Server 2012 Active Directory infrastructure that consists of two domain controllers. Bound to the Active Directory domain is a Mac OS X Mavericks Server 10.9.3, which runs Profile Manager and the VPN service. My Active Directory users are able to authenticate to Profile Manager, but not to the VPN. I have found several threads on other forums of users reporting similar issues; here is just one of many references: https://discussions.apple.com/thread/5174619. It appears the issue is related to a CHAP authentication failure. Can anyone suggest what troubleshooting steps I might take next? Is there a way to liberalize the authentication mechanism to include MSCHAP? Here is an excerpt of the transaction from the logs (note the domain has been changed to example.com):

    [code]
    Jun 6 15:25:03 profile-manager.example.com vpnd[10317]: Incoming call... Address given to client = 192.168.55.217
    Jun 6 15:25:03 profile-manager.example.com pppd[10677]: publish_entry SCDSet() failed: Success!
    Jun 6 15:25:03 --- last message repeated 2 times ---
    Jun 6 15:25:03 profile-manager.example.com pppd[10677]: pppd 2.4.2 (Apple version 727.90.1) started by root, uid 0
    Jun 6 15:25:03 profile-manager.example.com pppd[10677]: L2TP incoming call in progress from '108.46.112.181'...
    Jun 6 15:25:03 profile-manager.example.com racoon[257]: pfkey DELETE received: ESP 192.168.55.12[4500]->108.46.112.181[4500] spi=25137226(0x17f904a)
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP connection established.
    Jun 6 15:25:04 profile-manager kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 0)
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connect: ppp0 <--> socket[34:18]
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: CHAP peer authentication failed for alex
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connection terminated.
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnecting...
    Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnected
    Jun 6 15:25:04 profile-manager.example.com vpnd[10317]: --> Client with address = 192.168.55.217 has hung up
    [/code]


  • Excel data into PowerPoint slides

    - by nqw1
    I have already found some helpful sites, but I'm still unable to do what I want. My Excel file contains a few columns and multiple rows. All the data from one row would go on one slide, but data from different cells in that row should go to specific elements of the PowerPoint slide.

    First: is it possible to export data from an Excel cell into a specific text box in PowerPoint? For example, I would like all data from the first column of each row to go to Text box 1. Say I have 100 rows; I would then have 100 slides, and each slide would have a Text box 1 with the correct data. The text box on slide 66 would hold the data from the first column of row 66. Then all data from the second column of each row would go to Text box 2, and so on.

    I tried to write some macros, with little success. I also tried to use Word outlines and export them into PowerPoint (New Slide - Slides from Outline), but there seems to be a bug, since I got 250 pages of gibberish. I had only two paragraphs, each with one word; the first used the Heading 1 style and the second used the Normal style. The sites I have found use VBA and/or some other programming language to create slides from Excel sheets. I have tried to add those VBA codes to my macros, but none of them has worked so far. Probably I just don't know how to use them correctly :)

    Here are some helpful sites:
    VBA: Create PowerPoint Slide for Each Row in Excel Workbook
    Creating a Presentation Report Based on Data
    Question in Stackoverflow

    I use Office 2011 on Mac. Any help would be appreciated!
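    A minimal VBA sketch of the Excel-to-PowerPoint direction, run from Excel's VBA editor (the sheet name, the two-column mapping, and the layout index are assumptions to adapt; late binding is used so no PowerPoint reference needs to be set):

    [code]
    Sub RowsToSlides()
        Dim ppApp As Object, ppPres As Object, ppSlide As Object
        Dim ws As Worksheet, r As Long, lastRow As Long
        Set ws = ThisWorkbook.Worksheets("Sheet1")
        lastRow = ws.Cells(ws.Rows.Count, 1).End(xlUp).Row
        Set ppApp = CreateObject("PowerPoint.Application")
        Set ppPres = ppApp.Presentations.Add
        For r = 1 To lastRow
            ' One slide per row; 2 = ppLayoutText (title + body placeholders)
            Set ppSlide = ppPres.Slides.Add(r, 2)
            ' Column 1 of row r -> first text box, column 2 -> second text box
            ppSlide.Shapes(1).TextFrame.TextRange.Text = CStr(ws.Cells(r, 1).Value)
            ppSlide.Shapes(2).TextFrame.TextRange.Text = CStr(ws.Cells(r, 2).Value)
        Next r
    End Sub
    [/code]

    Each run builds one slide per row, so 100 rows produce 100 slides with each row's cells landing in the same placeholders every time; extra columns just mean more Shapes(n) assignments.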


  • Batch file reads in text file and replaces quotes with variables into new file?

    - by John
    I have customer pizza lists that I have saved into 5 separate txt files from my database, in this format:

    [code]
    Filename = 25Percent.TXT
    "555-1211"
    "555-1212"
    "555-1223"
    ... etc.
    [/code]

    Each list has one phone number in quotes per line, and each list varies in length.

    Part I: I have two text fragments that I would like to wrap around each quoted number in the 5 text files:

    Var A = < Discount Pizza price for phone number is "
    Var B = " 25 %

    So I would like to run a batch file that reads each line in the text file and writes the following into another text file, replacing the quotes with the variables:

    [code]
    New Filename = 25PercentFinished.TXT
    < Discount Pizza price for phone number is "555-1211" 25 %
    < Discount Pizza price for phone number is "555-1212" 25 %
    < Discount Pizza price for phone number is "555-1223" 25 %
    [/code]

    I would then repeat this for 30Percent.txt, 35Percent.txt, 40Percent.txt, and finally 50Percent.txt.

    Part II: I would also like to append the 5 new files together, with the append command? I am assuming that the SET Var command would also be used? Not sure what to do here.
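    A sketch of one way to do both parts in a single batch file (assumes the five lists sit in the current directory with the names above; note that < must be escaped as ^< and a literal percent sign is written %% in a batch file):

    [code]
    @echo off
    rem Part I: wrap every line; the quotes come along with the line itself
    for %%P in (25 30 35 40 50) do (
        if exist %%PPercentFinished.txt del %%PPercentFinished.txt
        for /f "usebackq delims=" %%L in ("%%PPercent.txt") do (
            echo ^< Discount Pizza price for phone number is %%L %%P %%>> %%PPercentFinished.txt
        )
    )
    rem Part II: append the five finished lists into one file
    copy /b 25PercentFinished.txt+30PercentFinished.txt+35PercentFinished.txt+40PercentFinished.txt+50PercentFinished.txt AllDiscounts.txt
    [/code]

    SET isn't strictly needed here because the two fragments are written literally; if they change often, they could be stored with set "VarA=..." and set "VarB=..." and echoed as !VarA!%%L!VarB! with setlocal EnableDelayedExpansion at the top.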


  • New keyboard for linux: Adesso Tru-Form or MS Natural Keyboard 4000?

    - by Andrea
    Hi folks! I'm going to buy a new ergonomic keyboard for my laptop. In the following, keep in mind that I live in Italy. I considered these models:

    Adesso PCK-308UB - Adesso Tru-Form Pro - Contoured Ergonomic Keyboard with TouchPad-PS2
    Pros:
    - has a built-in touchpad in the same position as my laptop's
    - somewhat cheaper than the alternative below
    Cons:
    - the surface doesn't seem to be bowl-shaped; the keys seem to lie on a straight, slightly inclined surface, which seems to be an idea used extensively in other ergonomic keyboards
    - according to a few comments on the net, new Adesso keyboards seem to lack robustness and are likely to lose small parts after a few weeks or months; other users, instead, seem to have had no problems in years and swear by their quality and comfort, though those who had problems lamented a lack of responsiveness from the manufacturer
    - I'm not sure whether the keyboard (at least the standard keys) and the touchpad will both be recognized correctly under Linux distros (I mostly use FC, btw)
    - last time I checked, Adesso didn't have local resellers in my country

    Microsoft Natural Ergonomic Keyboard 4000
    Pros:
    - recognized as one of the most comfortable keyboards
    - reliable customer service operating in my country
    - AFAIK there are several documented ways to get the extra buttons working with Linux
    Cons:
    - it doesn't have a built-in touchpad, and it has a numeric keypad wasting the space my hand has to cross to reach the mouse

    But there could be other keyboards I haven't considered yet, so here follows my ideal keyboard wishlist, ordered by priority:
    1. Linux compatible
    2. basic ergonomic design, which entails a split, tilted keyboard and pads
    3. advanced ergonomic design, like Truly Ergonomic's or Kinesis's, where special keys (like Enter, Caps Lock...) are placed symmetrically in the middle, to be used by the thumbs
    4. a built-in touchpad/trackball placed under the keyboard; I just love this on my notebook. I think it's pretty effective, since it allows my hand to rest naturally every time I use it. Any opinion on this?
    5. high-quality switches, like Cherry's (unsure about this one)
    6. additional programmable keys placed near the usual ones, to simplify typing shortcuts

    TIA, Andrea


  • Acer Aspire one is falling apart on Ubuntu

    - by Narcolapser
    Question: How do I get my netbook to detect and install hardware? Info: I have an Acer Aspire One; it came with Windows XP and I loaded it with Win7. I decided I wanted to change to Ubuntu, so I tried Ubuntu Netbook Remix, which failed horribly, and after attempting three or so other OSes I ended up with Ubuntu Desktop 9.10. That worked fine for a while, but there were some minor issues, so I asked a question about it here and decided to change my OS again. This last weekend I tried Mandriva, as someone suggested in my other question, with no success. When my netbook earlier lost the ability to use its touchpad, I didn't think much of it; I just assumed it must be a driver issue or something. But after Mandriva failed, and I also tried Damn Small Linux and Debian, which both failed too, I decided to switch back to Ubuntu Desktop (somewhere in here my keyboard also stopped working for one attempt). But first I gave the Netbook Remix one more try. It worked this time, with the exception that it didn't have any networking. I thought it was a driver issue again and finished the weekend with Ubuntu Desktop 9.10 once more. Now things get really crazy: it doesn't know it has a wireless card or an ethernet card, and it doesn't know my phone is connected trying to provide wireless broadband either. I'm clueless about what the problem could be, and with only minimal Ubuntu experience I can't navigate the entire interface with only my keyboard (it doesn't detect a USB mouse when I plug one in, though it did when I installed it; in fact, the network interfaces were working just fine when I live-booted Ubuntu to install it). Even so, I don't know where to go or what to do to make it recognize its hardware. I'm in a dire situation; any help is welcome.
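    When a live session can see the hardware but the installed system cannot, comparing the two from a terminal is the quickest check (a sketch; Ctrl+Alt+F1 gives a text console if the GUI is unusable without a mouse):

    [code]
    # Is the hardware on the bus at all?
    lspci -nnk | grep -A3 -i net
    lsusb
    # What does the kernel say as devices (dis)appear?
    dmesg | tail -n 50
    # Are interfaces present but unclaimed or blocked?
    sudo rfkill list
    sudo lshw -C network
    [/code]

    If lspci lists the wireless and ethernet chips but lshw shows them as UNCLAIMED, it's a driver/module problem; if they're missing from lspci entirely, the hardware or BIOS is the suspect rather than Ubuntu.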


  • How to generate thumbnails for less common video containers (mkv, ogm, mp4, flv, rmvb and mov) in Windows 7

    - by fluxtendu
    So how do I generate thumbnails for these containers? I know that installing "DivX Plus Tech Preview: MKV on Windows 7" does it for MKV, but I think only some registry changes are really necessary, and I want the same for the other containers. If it's possible to avoid installing (always bloated) codec packs, that would be nice... maybe only installing ffdshow or essential, separate codecs. (Some time ago I tried reg files for Vista, without success...)

    Update: I have installed Win7codecs and tweaked its settings a little, and I got almost everything I want. (I have also re-installed the relevant part of the DivX Plus Tech Preview to get something other than an all-black preview for MKV.)

    Issues that I still want to resolve:
    - Find a cleaner and lighter method
    - Almost all my rmvb and mov files get an all-black preview (installing Real Media / QuickTime Alternative doesn't help; is it the same with the official players?)
    - With almost all containers (avi, mkv, ogm, mpg), I have a few random files that don't get a preview. I can play them in WMP or another player, and I haven't found a pattern in the codecs used.
    - All wmv, flv and mp4 files have previews, but I have fewer files in those containers.
    (I clear my thumbnail cache to test them.)

    More generally, I would like to understand how Windows handles containers and codecs to generate the previews. And software that let me choose arbitrarily which picture is previewed would be convenient too.


  • Help with email account management among multiple users

    - by CogitoErgoSum
    I preface this by saying it may belong on IT Security, not too sure; feel free to move it. Currently we have an email account, [email protected], hosted via Google Apps (as is all our email). We had an incident where we had to terminate an employee. This employee, however, had the password for this account, as we have 20-30 people utilizing it at any given point to manage customer emails, etc. Thinking on this, I feel there must be a better way to manage access. With Google you can associate up to 10 email accounts with another; the problem is we have more like 20-30 people using it. We were evaluating tools such as Salesforce and Assistly, where people have their own login credentials and the system contains the appropriate SMTP information for the [email protected] address, so emails are sent from it rather than from a user's personal account. Aside from those options, does anyone have any other thoughts? One suggestion floated was moving everyone to desktop clients and saving the password info there, so they could only log in from their physical workstations, but we may have situations where we'd like employees to work remotely. Does anyone have experience with this sort of system, where ~20-30 people respond from one mailbox, and with how to manage security and access for it?


  • Why Is ModSecurity Unable to Access the Data Directory?

    - by tommytwoeyes
    Update: I think we've solved this; the problem appears to have been that the /modsec_storage directory had an incorrect SELinux context type. However, we're still not sure, because although Apache was able to create files in that directory for the global and ip collections (global.dir/global.pag and ip.dir/ip.pag) after I changed the SELinux context type, the new files still have zero bytes. I'm new to ModSecurity and am not sure whether the files are empty because something is wrong with the configuration, or because ModSecurity has simply determined it doesn't need to store IP addresses persistently after each transaction ends. Anyone able to offer guidance here?

    I recently installed ModSecurity (v2.5.12 / CRS v2.0.8) on our production server, and everything works great except for these errors that it keeps writing to the Apache error log:

    [code]
    Failed to access DBM file "/modsec_storage/global": Permission denied [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"] [unique_id "LZ6jc38AAAEAAFO6408AAABO"]
    Failed to access DBM file "/modsec_storage/ip": Permission denied [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"] [unique_id "LZ6jc38AAAEAAFO6408AAABO"]
    [/code]

    After following the instructions for file permission settings in the ModSecurity handbook by Ivan Ristic, with no success, I created a /modsec_storage directory, set the owner and group to apache, and set the permissions on the directory recursively to 777. However, ModSecurity is still reporting the same permission errors, so I am stumped. Can anyone tell me how to fix this?
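    The SELinux piece can be made persistent rather than relying on a one-off chcon (a sketch, assuming a RHEL/CentOS-style targeted policy where httpd_sys_rw_content_t grants Apache write access; semanage comes from the policycoreutils-python package):

    [code]
    # Record the context in local policy, then apply it to the directory tree
    semanage fcontext -a -t httpd_sys_rw_content_t "/modsec_storage(/.*)?"
    restorecon -Rv /modsec_storage

    # Confirm what Apache sees, and check for any remaining denials
    ls -lZd /modsec_storage
    ausearch -m AVC -ts recent
    [/code]

    On the zero-byte files: the collections are SDBM databases that ModSecurity only populates when a rule actually writes to them (e.g. a setvar action after initcol), so empty .pag files right after a restart don't by themselves indicate a fault.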


  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory, and these should get propagated to the WC (working copy), as opposed to the normal direction of WC -> REPO.

    Scenario:
    My svn repo: /var/www/svn/drupal
    My checkout dir / working copy: /var/www/html/drupalsite

    So I've done the following:
    1. Edited the post-commit hook to contain: "/usr/bin/svn update /var/www/html/drupalsite"
    2. I won't make any changes in the svn WC; I'll make changes in the svn REPO, /var/www/svn/drupal.
    3. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn will run "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes in the REPO.

    Query:
    a. Would the above steps 1-3 achieve my requirement?
    b. I'd need advice on how to test whether the above setup works. I'm unsure about the success of steps 1-3, which is why query (a) exists; this is a bit more of a concern for me.

    NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. Query (b) arises because I'm not a developer. It's a PHP Drupal website and I happen to be setting it up, so I'm not aware of how to make a "proper" change in the REPO so that it gets reflected in the WC. If it is reflected, my config is right and the team can start development. I manually put a random file/folder into the REPO dir hoping to see a change in the WC and ran steps 1-3, but to no avail; I later learned that this was NOT the way to make a change to a REPO. Please advise. Thanks.
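    A sketch of the usual shape of this setup (assumes the repository lives at /var/www/svn/drupal and the user the hook runs as can write to the working copy; the log path is a placeholder):

    [code]
    #!/bin/sh
    # /var/www/svn/drupal/hooks/post-commit  (must be executable: chmod +x post-commit)
    REPOS="$1"
    REV="$2"
    # Update the deployment working copy after every commit
    /usr/bin/svn update /var/www/html/drupalsite --non-interactive >> /var/log/svn-deploy.log 2>&1
    [/code]

    To test it end to end: check out a scratch working copy somewhere else (svn co file:///var/www/svn/drupal /tmp/wc), commit a trivial change from there, and confirm the change appears in /var/www/html/drupalsite. Note that a commit is the only proper way to change a repository; copying files into /var/www/svn/drupal by hand corrupts nothing useful and never fires the hook.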


  • How to configure DNS server to forward queries about particular domain AND all of its subdomains

    - by user71061
    I have a DNS server (a Linux box with BIND 9) which is authoritative for some domains and forwards all other queries to my ISP's external DNS server. So far, no problem. Now I want queries about some specific domains to be forwarded to my internal DNS server, e.g.:

    [code]
    zone "some_domain" {
        type forward;
        forwarders { some_internal_dns_ip; };
    };
    [/code]

    Still no problem; this works. But then I also want to forward some reverse DNS queries to my internal DNS, so I added:

    [code]
    zone "16.172.in-addr.arpa" {
        type forward;
        forwarders { some_internal_dns_ip; };
    };
    [/code]

    This doesn't work as I expect. Queries near the zone apex (for example 1.16.172.in-addr.arpa) are resolved correctly, but reverse queries for a full address (for example 1.1.16.172.in-addr.arpa) are not. I understand that my server should use some recursive query here, but I could not configure it. I have already tried adding the following options, with no success:

    [code]
    recursion yes;
    allow-recursion { 127.0.0.1; };
    allow-recursion-on { 127.0.0.1; };
    [/code]

    (I used the loopback address here because I need this functionality only for the DNS host itself, not for its clients.) Any suggestions?
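    For a forward zone that must cover every name under it, telling BIND not to fall back to its own resolution is the usual missing piece (a sketch; forward only is standard BIND 9 syntax):

    [code]
    zone "16.172.in-addr.arpa" {
        type forward;
        forward only;
        forwarders { some_internal_dns_ip; };
    };
    [/code]

    With the default forward first behavior, BIND may attempt to resolve 1.1.16.172.in-addr.arpa itself when the forwarder's answer looks like a delegation, which is where full-address lookups often diverge from lookups near the zone apex; dig -x 172.16.1.1 @127.0.0.1 after an rndc reload is a quick way to verify the change.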


  • Configuring OS X L2TP VPN to use Certificate for IPSEC layer instead of Pre Shared Key

    - by Matthew Savage
    I'm trying to set up an L2TP VPN on a Mac OS X Snow Leopard Server and have had success using a pre-shared key. However, I would rather not rely on a simple string and want to use a certificate instead. Setting this up on the server side is seemingly easy: you simply select a certificate you have generated from the list and hit Apply. However, when I try to use the certificate on the client side, it fails. I exported the certificate into a P12 file, transferred it to the client, and imported it into the login keychain. But when I try to choose the certificate (from Network preferences, clicking Authentication Settings, then selecting Certificate and pressing Select), I am shown the following error:

    No machine certificates found. Certificate authentication cannot be used because your keychain does not contain any suitable certificates. Use Keychain Access to import the certificate into your keychain. If you do not have the certificates required for authentication, contact your network administrator.

    Unfortunately, even when I generate a certificate where I override the defaults and ensure the DNS name etc. are set properly, this doesn't change. When I select Certificate Authentication for the user auth and click Select, the certificate for the server shows up there, but obviously this isn't where I need it to be available.
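    The wording of the error ("No machine certificates found") suggests the IPSec layer looks in the System keychain rather than the login keychain the P12 landed in. A sketch of importing it at the machine level (the path and file name are placeholders):

    [code]
    sudo security import /path/to/vpncert.p12 -f pkcs12 \
      -k /Library/Keychains/System.keychain
    # Verify the identity (certificate + private key) is visible system-wide
    security find-identity -p ipsec /Library/Keychains/System.keychain
    [/code]

    The identity needs to include the private key (a .p12 carries one), and the certificate's name generally has to match what the server presents for the IPSec negotiation to proceed.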


  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on Nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

    [code]
    $.ajax({
        type: 'GET',
        async: true,
        url: $(this).data('route'),
        data: $('input[name=data]').val(),
        dataType: 'json',
        success: function (data) { /* do stuff */ },
        error: function (data) { /* handle errors */ }
    });
    [/code]

    The code below is called after the above; on Apache it takes about 100ms to execute and repeats itself, showing progress for the data being written in the background:

    [code]
    checkStatusInterval = setInterval(function () {
        $.ajax({
            type: 'GET',
            async: false,
            cache: false,
            url: '/process-status?process=' + currentElement.attr('id'),
            dataType: 'json',
            success: function (data) { /* update progress bar and status message */ }
        });
    }, 1000);
    [/code]

    Unfortunately, when this script is run under nginx, the progress request never finishes even a single request until the first AJAX request that sent the data is done. If I change async to true in the above, one executes every interval, but none of them complete until that very first AJAX request finishes.

    Here is the main nginx conf file:

    [code]
    #user nobody;
    worker_processes 1;

    #error_log logs/error.log;
    #error_log logs/error.log notice;
    #error_log logs/error.log info;

    #pid logs/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;
        default_type application/octet-stream;
        server_names_hash_bucket_size 64;

        # configure temporary paths
        # nginx is started with param -p, setting nginx path to serverpack installdir
        fastcgi_temp_path temp/fastcgi;
        uwsgi_temp_path temp/uwsgi;
        scgi_temp_path temp/scgi;
        client_body_temp_path temp/client-body 1 2;
        proxy_temp_path temp/proxy;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

        #access_log logs/access.log main;

        # Sendfile copies data between one FD and another from within the kernel.
        # More efficient than read() + write(), which requires transferring data
        # to and from user space.
        sendfile on;

        # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
        # instead of using partial frames. This is useful for prepending headers before
        # calling sendfile, or for throughput optimization.
        tcp_nopush on;

        # don't buffer data-sends (disable Nagle algorithm).
        # Good for sending frequent small bursts of data in real time.
        tcp_nodelay on;

        types_hash_max_size 2048;

        # Timeout for keep-alive connections. Server will close connections after this time.
        keepalive_timeout 90;

        # Number of requests a client can make over the keep-alive connection.
        # This is set high for testing.
        keepalive_requests 100000;

        # allow the server to close the connection after a client stops responding.
        # Frees up socket-associated memory.
        reset_timedout_connection on;

        # send the client a "request timed out" if the body is not loaded
        # by this time. Default 60.
        client_header_timeout 20;
        client_body_timeout 60;

        # If the client stops reading data, free up the stale client connection
        # after this much time. Default 60.
        send_timeout 60;

        # Size Limits
        client_body_buffer_size 64k;
        client_header_buffer_size 4k;
        client_max_body_size 8M;

        # FastCGI
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 120;
        fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, increase this value
        fastcgi_buffer_size 64k;
        fastcgi_buffers 4 64k;
        fastcgi_busy_buffers_size 128k;
        fastcgi_temp_file_write_size 128k;

        # Caches information about open FDs, frequently accessed files.
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # Turn on gzip output compression to save bandwidth.
        # http://wiki.nginx.org/HttpGzipModule
        gzip on;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        gzip_http_version 1.1;
        gzip_vary on;
        gzip_proxied any;
        #gzip_proxied expired no-cache no-store private auth;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

        # show all files and folders
        autoindex on;

        server {
            # access from localhost only
            listen 127.0.0.1:80;
            server_name localhost;
            root www;

            # the following default "catch-all" configuration allows access to the server from outside.
            # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
            # listen 80;
            # server_name _;

            log_not_found off;
            charset utf-8;
            access_log logs/access.log main;

            # handle files in the root path /www
            location / {
                index index.php index.html index.htm;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            # error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root www;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass 127.0.0.1:9100;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            # add expire headers
            location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                expires 30d;
            }

            # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
            # deny access to git & svn repositories
            location ~ /(\.ht|\.git|\.svn) {
                deny all;
            }
        }

        # include config files of "enabled" domains
        include domains-enabled/*.conf;
    }
    [/code]

    Here is the enabled domain conf file:

    [code]
    access_log off;
    access_log C:/server/www/test.dev/logs/access.log;
    error_log C:/server/www/test.dev/logs/error.log;

    # HTTP Server
    server {
        listen 127.0.0.1:80;
        server_name test.dev;
        root C:/server/www/test.dev/public;
        index index.php;
        rewrite_log on;
        default_type application/octet-stream;
        #include /etc/nginx/mime.types;

        # Include common configurations.
        include domains-common/location.conf;
    }

    # HTTPS server
    server {
        listen 443 ssl;
        server_name test.dev;
        root C:/server/www/test.dev/public;
        index index.php;
        rewrite_log on;
        default_type application/octet-stream;
        #include /etc/nginx/mime.types;

        # Include common configurations.
        include domains-common/location.conf;
        include domains-common/ssl.conf;
    }
    [/code]

    Contents of ssl.conf:

    [code]
    # OpenSSL for HTTPS connections.
    ssl on;
    ssl_certificate C:/server/bin/openssl/certs/cert.pem;
    ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_param HTTPS on;
        fastcgi_pass 127.0.0.1:9100;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    [/code]

    Contents of location.conf:

    [code]
    # Remove trailing slash to please Laravel routing system.
    if (!-d $request_filename) {
        rewrite ^/(.+)/$ /$1 permanent;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # We don't need .ht files with nginx.
    location ~ /(\.ht|\.git|\.svn) {
        deny all;
    }

    # Added cache headers for images.
    location ~* \.(png|jpg|jpeg|gif)$ {
        expires 30d;
        log_not_found off;
    }

    # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
    location ~* \.(js|css)$ {
        expires 3h;
        log_not_found off;
    }

    # Add expire headers.
    location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
        expires 30d;
    }

    # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9100;
    }
    [/code]

    Any ideas where this is going wrong?
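    One difference that often explains "parallel on Apache, serialized on nginx" with PHP behind FastCGI is concurrency at the PHP layer rather than in nginx itself: a single PHP FastCGI worker on 127.0.0.1:9100 can only run one request at a time, and PHP's file-based session lock serializes requests from the same browser session even with multiple workers. A sketch of the session-lock fix (assumes the long-running endpoint uses PHP sessions; process_id and the job function are hypothetical names):

    [code]
    <?php
    session_start();
    // Read what the job needs from the session, then release the file lock
    // so the /process-status polls from the same browser session can proceed.
    $processId = isset($_SESSION['process_id']) ? $_SESSION['process_id'] : null;
    session_write_close();

    run_long_background_job($processId); // hypothetical long-running work
    [/code]

    Checking how many PHP processes actually listen behind 127.0.0.1:9100, and whether the status endpoint also calls session_start(), narrows down which of the two serialization points is biting.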


  • Problem importing pictures from a digital camera with Windows 7

    - by snark
    Hi. Since I started using Windows 7 (Beta, then RC, now RTM), I have had issues when I download pictures from my digital cameras. It happens with both of my cameras: a Canon PowerShot S2 IS and a Canon Ixus 80 IS. When I plug a camera (either of them) into a USB port and switch it on in Play mode, the Autoplay function of Windows 7 starts, and I select "Import pictures and videos" to launch the native Windows 7 tool. It searches a bit for pictures to download from the camera and starts the transfer. However, during the transfer I often get errors. If I use "Try again", it works fine the second time and the picture is retrieved correctly. It's very annoying when it happens 20 or 30 times in a 500-picture download, and I cannot leave it running unattended, as I have to watch for the errors and click "Try again". Any idea what is causing these errors? I tried changing the USB port (normally the cameras are connected via a USB hub, but it also happens when I connect them directly to a motherboard USB port) and the USB cable, but no success. I also checked the SD cards by connecting them with a card reader and running ChkDsk on them, but it found no errors on the cards.

    Update: No problem when I copy the pictures manually with Windows Explorer, and no problem either when I access the card with a reader. The built-in import tool of Windows is convenient, as it sorts the pictures automatically by date (one folder per day), and this is the way I sort my pictures.

