Search Results

Search found 4616 results on 185 pages for 'c strings'.

Page 140/185

  • What is a better way to handle data retrieval in an application that works with a limited amount of data?

    - by Milanix
    This is not really a coding question, since I am not adding any code here; including my code snippets would make this question really long. Instead, I am interested in better ways to retrieve data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically/manually on a daily basis depending on user preference, an actual version update happens only once every few months or so, since it is done by some other authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains "(n number of groups)(days in a week)(n number of schedules)". The number of groups is usually 6 and the number of schedules is usually 2, so there would usually be only around 100 strings. Now, although I am using SQLite at the moment, I want to know how to make the update on the database. Should I show a progress dialog saying that the application is updating and exit the app when it's done? Since my updates are infrequent I don't think this will really harm the user experience, but is there a better way to do it? I don't want the update to be made while the user is searching, which also uses the database; that would cause a "database already open" exception, or at least I have faced this problem before. Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade performance? It would really be a great help if anyone could give me a better overview of this, or maybe counter-arguments against each approach. Many thanks!

    Read the article

  • A better way to organize your Silverlight Code Snippets.

    - by mbcrump
    I hate re-writing code. I also hate it when I find a great code snippet on the web and forget to bookmark it, or it gets lost in my endless sea of bookmarks. So what do you do to get around this? This is the question I was asking myself at the end of 2010: how can I get my Silverlight code organized? My requirements for a snippet manager were: Needs to be FREE. An easy way to view XAML/C# code-behind together in one “view”. The ability to store the code snippets in the cloud in case my HDD dies. Searchable keywords to quickly find code snippets. I started looking for a snippet manager that would let me do just that and finally found Snippet Manager. Before going any further, I think one of the most important things to note here is that this software supports 37 languages. It’s not just for Silverlight developers or C#-only guys; the software supports Java, SQL and even COBOL.   Below is a screenshot of the Snippet Manager that shows my Silverlight code snippet. You will notice that I have highlighted two sections. The top part is my XAML and the bottom is my C# code-behind. I’ve included a sample below of my code snippets so that you can get an idea of how I organized them. Another thing that’s great about this software is that it supports plain text; I added some connection strings in the TEXT section below.  Once you have finished adding your code snippets, you can store them in the cloud. I created an FTP directory called “snippets” on my FTP server and hit the upload button once I am finished adding my new code snippets. This will allow me to use the code snippets on another computer with this application on my USB key. See the screenshots below: Enter your FTP credentials below: Hit the Upload button on the Toolbar: Log in to your FTP server and verify the following files are now on the FTP server: Another great feature of the Snippet Manager is that you can also integrate it into VS2010 by clicking Tools –> External Tools: And setting up your External Tool entry to point to the executable: You can now launch it by going to Tools –> Snippet Manager. If you want, you could also set up a shortcut to launch the program with hotkeys. As you can see, this is a nice little program that includes everything needed to organize your code snippets very cleanly. I didn’t go over every feature, but this is something you might want to download and give a shot.

    Read the article

  • How do I serve multiple domains from the same directory and codebase without my configuration breaking when apache.conf is overwritten?

    - by neokio
    I have 20 domains on a VPS running cPanel. One public_html is filled with code, and the remaining 19 are symbolic links to that one. (For example, assets is a directory within public_html ... for the 19 others, there's a symbolic link to that directory in each account's public_html dir.) It's all PHP / MySQL database driven, with content changing depending on the domain. It works like a charm, assuming cPanel has suExec enabled correctly, and assuming apache.conf does NOT have SymLinksIfOwnerMatch enabled. However, every few weeks, my apache.conf is mysteriously overwritten, re-enabling SymLinksIfOwnerMatch and disabling all 19 linked sites for as long as it takes for me to notice. Here's the offending line in apache.conf: <Directory "/"> AllowOverride All Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes SymLinksIfOwnerMatch </Directory> The addition of SymLinksIfOwnerMatch disables the sites in a strange way ... the HTML is generated correctly, but all css/js/images in the HTML fail to load, and clicking any link redirects to /. I have no idea why. I do have a few things in my .htaccess, which work fine when SymLinksIfOwnerMatch is not present: <IfModule mod_rewrite.c> # www.example.com -> example.com RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L] # Remove query strings from static resources RewriteRule ^assets/js/(.*)_v(.*)\.js /assets/js/$1.js [L] RewriteRule ^assets/css/(.*)_v(.*)\.css /assets/css/$1.css [L] RewriteRule ^assets/sites/(.*)/(.*)_v(.*)\.css /assets/sites/$1/$2.css [L] # Block access to hidden files and directories RewriteCond %{SCRIPT_FILENAME} -d [OR] RewriteCond %{SCRIPT_FILENAME} -f RewriteRule "(^|/)\." - [F] # SLIR ... reroute images to image processor RewriteCond %{REQUEST_URI} ^/images/.*$ RewriteRule ^.*$ - [L] # ignore rules if URL is a file RewriteCond %{REQUEST_FILENAME} !-f # ignore rules if URL is not php #RewriteCond %{REQUEST_URI} !\.php$ # catch-all for routing RewriteRule . index.php [L] </ifModule> I also use most of the 5G Blacklist 2013 for protection against exploits and other depravities. Again, all of this works great, except when SymLinksIfOwnerMatch gets added back into apache.conf. Since I've failed to find the cause of whatever cPanel/security update is overwriting apache.conf, I thought there might be a more correct way to accomplish my goal using group permissions. I've created a 'www' group, added all accounts to the group, and chmod -R'd the code source to use that group. Everything is 644 or 755, but that doesn't seem to be enough. My unix isn't that strong. Do you need to restart something for group changes to take effect? Probably not. Anyway, I'm entering unknown territory. Can anyone recommend the right way to configure a website for multiple sites using one codebase that doesn't rely on apache.conf?
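
    One way to stop depending on that global <Directory "/"> block is to pin the options for the shared docroot in a per-vhost include that cPanel preserves across rebuilds. A hedged sketch only: the include path below is an assumption based on cPanel's standard userdata layout, and USER / example.com are placeholders; create the file through WHM's Include Editor so EasyApache keeps it.

        # /usr/local/apache/conf/userdata/std/2/USER/example.com/options.conf  (path is an assumption)
        <Directory "/home/USER/public_html">
            # re-assert symlink following even if the top-level apache.conf gets regenerated
            Options +FollowSymLinks -SymLinksIfOwnerMatch
            AllowOverride All
        </Directory>

    After creating it, rebuilding the config (the stock /scripts/rebuildhttpdconf script) and restarting Apache should leave this override in place even when the main apache.conf is rewritten.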

    Read the article

  • What exactly is an X-YMailISG header?

    - by iainH
    Finally ... our emails are being seen by Yahoo! not as junk anymore. Hurray! However I notice that the Yahoo! receiving MTA adds in a X-YMailISG header. It's very large ... 2**10 bits? Now that I've invested too large a chunk of my waking life in crafting our email headers I'm curious to know what an X-YMailISG header is. Can anybody tell me? Does it pose any security / authenticity issues? There's very little intelligible from Google results. Background: After many days tweaking TXT records in our domain's DNS zone file for SPF and DKIM, I have at last succeeded in generating email from our Drupal site that Yahoo! no longer marks as X-YahooFilteredBulk and the excellent service [email protected] returns results that show the emails passing SPF, DKIM and Sender-ID checks and appearing to SpamAssassin as ham. Yahoo! even adds a Received-SPF: pass header. Useful links: http://www.goldfisch.at/knowwiki/howtos/dkim-filter http://old.openspf.org/wizard.html Strangely enough the SPF TXT record needed / allowed a blank key / name field in our registrar's DNS management panel whereas the DKIM record needed the {selector}._domainkey as the key /name of the DKIM strings.
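
    On the header itself: X-YMailISG appears to be an opaque, Yahoo-internal anti-spam signature added by their inbound MTAs, and it should not pose a security or authenticity issue for the sender. For reference, the two DNS records the question describes would look roughly like this in a zone file; this is a sketch only, with the selector, key material and policy strictness as placeholders:

        ; SPF at the zone apex – the "blank" host name the registrar panel wanted
        @                 IN TXT "v=spf1 a mx ~all"
        ; DKIM public key – the record name must be <selector>._domainkey
        mail._domainkey   IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3...(public key)..."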

    Read the article

  • Office365 SPF record has too many lookups

    - by Sammitch
    For some utterly ridiculous administrative reasons we've got a split domain with one mailbox on Office365 which requires us to add include:outlook.com to our SPF record. The problem with this is that that rule alone requires nine DNS lookups of the maximum of 10. Seriously, it's horrible. Just look at it: v=spf1 include:spf-a.outlook.com include:spf-b.outlook.com ip4:157.55.9.128/25 include:spfa.bigfish.com include:spfb.bigfish.com include:spfc.bigfish.com include:spf-a.hotmail.com include:_spf-ssg-b.microsoft.com include:_spf-ssg-c.microsoft.com ~all Given that we have our own large-ish mail system we need to have rules for a, mx, include:_spf1.mydomain.com, and include:_spf2.mydomain.com which puts us at 13 DNS lookups which causes PERMERRORs with strict SPF validators, and completely unreliable/unpredictable validation with non-strict/badly implemented validators. Is it possible to somehow eliminate 3 of those include: rules from the bloated outlook.com record, but still cover the servers used by O365? Edit: Commentors have mentioned that we should simply use the shorter spf.protection.outlook.com record. While that is news to me, and it is shorter, it's only one record shorter: spf.protection.outlook.com include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com include:spf.messaging.microsoft.com include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com Edit² I suppose we can technically pare this down to: v=spf1 a mx include:_spf1.mydomain.com include:_spf2.mydomain.com include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com ~all but the potential issues I see with this are: We need to keep abreast of any changes to the parent spf.protection.outlook.com and spf.messaging.microsoft.com records. If anything is changed or [god forbid] added we would have to manually update ours to reflect that. With our actual domain name the record's length is 260 chars, which would require 2 strings for the TXT record, and I honestly don't trust that all of the DNS clients and SPF resolvers out there will properly accept a TXT record longer than 255 bytes.
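
    On the 255-byte worry: a single TXT record can carry several quoted character-strings that resolvers concatenate, so the long flattened policy does not have to fit in one string. A sketch of the zone-file syntax (the includes are copied from the question's own pared-down record; everything else is an assumption):

        @ IN TXT ( "v=spf1 a mx include:_spf1.mydomain.com include:_spf2.mydomain.com "
                   "include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com "
                   "include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com ~all" )

    Each quoted string stays under 255 bytes, and receivers see them joined into one policy. The maintenance concern stands, though: the outlook/frontbridge hosts can change without notice, which is exactly what include: was meant to absorb.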

    Read the article

  • ADODB DB2 DSN using IBMDADB2 provider

    - by Eli Sand
    I have a very bizarre issue with trying to establish a working connection to an IBM DB2 server from Classic ASP using ADODB. On my development server I am running IIS and have a local instance of DB2 running. When I create a system DSN on this server and try to connect to it with ADODB, I have to specify Provider=IBMDADB2; in my connection along with the DSN name - failure to include the provider and my connection won't work. On my production server(s), I have one running IIS and a second system running an instance of DB2. When I create a system DSN on the production IIS server and try to connect to it with ADODB, I cannot specify the provider, otherwise it throws an uncatchable error in an external module (I assume it's referring to the DB2 module) if I try to do anything past get a connection (oddly, opening the connection itself doesn't throw an error - but if I run a query it does). If I remove the Provider=IBMDADB2; from the connection string (thus I just have DSN=some_name), it works fine. On both systems I can verify through the ODBC connection manager that the DSN's work and can connect to the databases, and on both systems I have made sure to set the correct (only) instance of DB2 as the default. Can anyone tell me why I have to have different connection strings for the development and production servers? I would like to be able to use the same connection string for both environments if at all possible. If that means either specifying a provider for both, or for neither I don't care which - I would just like to know what's going on and how to fix it.
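
    For comparison, these are the two connection-string shapes being discussed (values are placeholders). One plausible explanation for the difference is that IBMDADB2 resolves "DSN" against the local DB2 catalog, which only exists where a full DB2 client/instance is installed, while a bare DSN= string makes ADO fall back to MSDASQL and the ODBC system DSN; that would match a dev box with a local DB2 instance and a production IIS box with only an ODBC driver. This is an assumption worth verifying, not a confirmed diagnosis.

        ' dev box (local DB2 instance) – explicit OLE DB provider, DSN treated as a catalogued database alias
        Provider=IBMDADB2;DSN=MYDB;Uid=db2user;Pwd=secret;
        ' production box – no provider given, so ADO uses MSDASQL over the ODBC system DSN
        DSN=MYDB;Uid=db2user;Pwd=secret;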

    Read the article

  • Why is the System process listening on port 80?

    - by Seth Spearman
    I am running Windows 7 RC1. I have multiple issues getting IIS to work on my system and today when I installed a new application and I tried to load it using http:\localhost\MyApplication I get absolutely no errors and I get no page load. Just a pretty, white blank page. I did some digging and I found something about some other process listening on port 80 so I did a scan using netstat -aon | findstr 0.0:80 and discovered that PID 4 was listening on that port. PID 4 does not show in task manager so I fired up Process Explorer and it showed me that PID 4 is the System process. (Multiple google searches seems to indicate that System always uses PID 4). Since then I am basically stuck. I have no idea why System needs port 80 and what to do about it. If you google the following strings you will find two helpful Experts-Exchange articles at the top of the search results and you can read them for some helpful information. (If I gave the direct URL to the pages then Experts-Exchange would ask you to pay...but when you click on the results from a google search you can scroll all of the way to the bottom to read the exchanges.) Here are the google searches... "System Process is listening on port 80 (Vista)" "SYSTEM Process is listening on Port 80 and Preventing IIS Default Website from Running" The last entry from the first result showed how to do a trace of http.sys at the following URL: http://blogs.msdn.com/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx Trace showed nothing useful. Any thoughts?
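
    PID 4 owning port 80 almost always means the kernel-mode HTTP listener (http.sys) holds a URL reservation for some Windows service, so the question becomes which service registered it. A diagnostic sketch, run from an elevated command prompt (it identifies the owner rather than fixing anything):

        :: show every URL group / request queue registered with http.sys and the owning process IDs
        netsh http show servicestate

        :: stopping the HTTP service lists everything that depends on it; usual suspects include the
        :: Web Deployment Agent, SQL Server Reporting Services, BranchCache and the WWW Publishing Service
        net stop http

    Once the owning service is known, moving it off port 80 or disabling it frees the port for IIS.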

    Read the article

  • Squid external_acl_type Cannot run process

    - by Alex Rezistorman
    I want to restrict uploading for group of the users via squid. So I've choosen to use external_acl_type but after reload of the squid it returns error. WARNING: Cannot run '/usr/local/etc/squid/lists/newupload.sh' process. Permissions of newupload.sh and squid are the same. newupload.sh is executive. How can I solve this problem? Thnx in advance. newupload.sh #!/bin/sh while read line; do set -- $line length=$1 limit=$2 if [ -z "$length" ] || [ "$length" -le "$2" ]; then echo OK else echo ERR fi done Strings from squid.conf external_acl_type request_body protocol=2.5 %{Content-Lenght} /usr/local/etc/squid/lists/newupload.sh acl request_max_size external request_body 5000 http_access allow users request_max_size Squid version squid -v Squid Cache: Version 3.2.13 configure options: '--with-default-user=squid' '--bindir=/usr/local/sbin' '--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/var' '--sysconfdir=/usr/local/etc/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid/squid.pid' '--with-swapdir=/var/squid/cache/squid' '--enable-auth' '--enable-build-info' '--enable-loadable-modules' '--enable-removal-policies=lru heap' '--disable-epoll' '--disable-linux-netfilter' '--disable-linux-tproxy' '--disable-translation' '--enable-auth-basic=PAM' '--disable-auth-digest' '--enable-external-acl-helpers= kerberos_ldap_group' '--enable-auth-negotiate=kerberos' '--disable-auth-ntlm' '--without-pthreads' '--enable-storeio=diskd ufs' '--enable-disk-io=AIO Blocking DiskDaemon IpcIo Mmapped' '--enable-log-daemon-helpers=file' '--disable-url-rewrite-helpers' '--disable-ipv6' '--disable-snmp' '--disable-htcp' '--disable-forw-via-db' '--disable-cache-digests' '--disable-wccp' '--disable-wccpv2' '--disable-ident-lookups' '--disable-eui' '--disable-ipfw-transparent' '--disable-pf-transparent' '--disable-ipf-transparent' '--disable-follow-x-forwarded-for' '--disable-ecap' '--disable-icap-client' '--disable-esi' '--enable-kqueue' '--with-large-files' '--enable-cachemgr-hostname=proxy.adir.vbr.ua' '--with-filedescriptors=131072' '--disable-auto-locale' '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' '--build=amd64-portbld-freebsd8.3' 'build_alias=amd64-portbld-freebsd8.3' 'CC=cc' 'CFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'LDFLAGS= -L/usr/local/lib' 'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'CPP=cpp' --enable-ltdl-convenience Related post: Restrict uploading for groups in squid http://squid-web-proxy-cache.1019090.n4.nabble.com/flexible-managing-of-request-body-max-size-with-squid-2-5-STABLE12-td1022653.html
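
    Before digging further into squid.conf, it is worth confirming the helper can actually be executed by the user Squid drops privileges to (the port was built with --with-default-user=squid, so that user is assumed below). A quick-check sketch:

        # every directory in the path needs execute permission for the squid user, not just the script
        ls -ld /usr /usr/local /usr/local/etc /usr/local/etc/squid /usr/local/etc/squid/lists
        ls -l  /usr/local/etc/squid/lists/newupload.sh

        # does it start at all as the proxy user?
        sudo -u squid /usr/local/etc/squid/lists/newupload.sh </dev/null

        # a missing or wrong interpreter in the shebang produces the same "Cannot run" warning
        head -1 /usr/local/etc/squid/lists/newupload.sh

    Separately, note that the format code in the external_acl_type line is spelled %{Content-Lenght}; if that is not a deliberate workaround, it will never match the real Content-Length header even once the helper runs.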

    Read the article

  • USB To Serial under OpenSuse 11.3

    - by Lars
    I have a LogiLink USB-To-Serial adapter. This has the PL2303 chip inside. When I insert the device: [26064.927083] usb 7-1: new full speed USB device using uhci_hcd and address 9 [26065.076090] usb 7-1: New USB device found, idVendor=067b, idProduct=2303 [26065.076099] usb 7-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [26065.076105] usb 7-1: Product: USB-Serial Controller [26065.076110] usb 7-1: Manufacturer: Prolific Technology Inc. [26065.079181] pl2303 7-1:1.0: pl2303 converter detected [26065.091296] usb 7-1: pl2303 converter now attached to ttyUSB0 So the device is recognized and the converter is attached to ttyUSB0. When I do screen /dev/ttyUSB0 9600 I get the error: bash: /dev/ttyUSB0: Permission denied So I went looking at the file permissions. ls -l from the /dev folder reports: crw-rw---- 1 root dialout 188, 0 2011-04-26 15:47 ttyUSB0 I added my user lars to the dialout group. When I run the groups command as lars it shows that I'm in the group, yet I still receive the permission denied error, both as lars and as root. I'm trying to connect a console cable to configure some Cisco switches. My OS is OpenSuse 11.3 x86_64 with kernel version 2.6.34.7-0.7-desktop.
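
    Group membership is only picked up at login, which is the usual reason the error survives the usermod. A quick sketch of how to verify and work around it in the current session:

        id -nG                  # 'dialout' must appear here, not only in /etc/group – if not, log out and back in
        newgrp dialout          # or pick up the group for this shell only
        ls -l /dev/ttyUSB0      # should still read crw-rw---- root dialout, as above
        screen /dev/ttyUSB0 9600

    Getting the error as root as well is odd; if it persists even for root, it is worth checking whether udev re-creates the node with different permissions each time the adapter is re-plugged.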

    Read the article

  • Nagios returns "No output returned from plugin" running process

    - by user56291
    I have a nagios server and a bunch of nagios clients that i currently monitor. All the clients are setup with the following nrpe configuration. check_users, check_load... metrics are successfully displayed on the nagios interface but check_nginx and check_server_proxy displayed as "Unknown"-(No output returned from plugin). As far as i understood nagios simply runs ps command and looks for either the argument strings or the name of the command to verify whether the service is running. Also with -c flag, one can give nagios a threshold to determine the output (ie: -c 1 returns 'OK' for if it finds at least 1 process.) nrpe_local.cfg: ###################################### # Do any local nrpe configuration here ###################################### allowed_hosts =127.0.0.1,10.0.2.181 command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 50% -c 25% command[check_server_proxy]=/usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js" command[check_nginx]=/usr/lib/nagios/plugins/check_procs -c 1:30 -C nginx nagios_server.cfg ... define host{ use generic-host ; Name of host template to use host_name plum alias plum address 10.0.2.88 check_command check-host-alive-by-ssh } ... #Check api-proxy-server define service{ use generic-service host_name plum service_description check api proxy service check_command check_nrpe!check_server_proxy } define service { use generic-service ; Name of service template to use host_name plum service_description CHECK_NGINX check_period 24x7 max_check_attempts 3 normal_check_interval 5 retry_check_interval 3 check_command check_nrpe!check_nginx notifications_enabled 1 } Also when i run the command on the nagios client: /usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js" I get the desired output PROCS OK: 1 process with args 'api-v1/server.js' I would really appreciate any pointers that might help me solve why it nrpe command does not return the desired output on the nagios server panel.
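
    A useful first split is to run the same checks through check_nrpe from the Nagios server itself, which separates NRPE transport/quoting problems from plugin problems. A sketch (host address taken from the config above):

        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_server_proxy
        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_nginx
        # if these print nothing while the local check_procs call works, the usual culprits are the
        # quoting of the -a argument in nrpe_local.cfg or a missing NRPE restart after editing it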

    Read the article

  • Can't compile Ruby 1.9.2 with OpenSSL 1.0.0c on CentOS 5

    - by pstinnett
    I'm trying to install Ruby 1.9.2 on CentOS 5.5. I get through most of the make process, but when it tries to compile OpenSSL I get an error. Below is the errror outputted: compiling openssl make[1]: Entering directory `/sources/ruby-1.9.2-p136/ext/openssl' gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/openssl -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wno-long-long -o ossl_x509.o -c ossl_x509.c In file included from ossl.h:201, from ossl_x509.c:11: openssl_missing.h:71: error: conflicting types for ‘HMAC_CTX_copy’ /usr/include/openssl/hmac.h:102: error: previous declaration of ‘HMAC_CTX_copy’ was here openssl_missing.h:95: error: conflicting types for ‘EVP_CIPHER_CTX_copy’ /usr/include/openssl/evp.h:459: error: previous declaration of ‘EVP_CIPHER_CTX_copy’ was here make[1]: *** [ossl_x509.o] Error 1 make[1]: Leaving directory `/sources/ruby-1.9.2-p136/ext/openssl' make: *** [mkmain.sh] Error 1 Any help would be greatly appreciated! I'm not a master at Linux by any means, but I was able to successfully install this version of Ruby on our dev server. Our live server is running a newer version of OpenSSL which I'm assuming is why it's breaking. Just not sure what the fix is!
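
    The errors come from Ruby's openssl_missing.h compatibility declarations colliding with the newer system OpenSSL headers, which is what the "conflicting types" lines above are pointing at. One workaround sketch is to rebuild just the openssl extension against a single, self-contained OpenSSL build instead of the mixed headers in /usr/include; the install prefix below is an assumption, adjust it to wherever 1.0.0c actually lives:

        cd /sources/ruby-1.9.2-p136/ext/openssl
        ruby extconf.rb --with-openssl-dir=/usr/local/openssl-1.0.0c   # path is a placeholder
        make && make install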

    Read the article

  • Migrating Windows XP BOOT.INI Settings to Windows 7 Boot-loader

    - by Synetech inc.
    Two months ago my motherboard died, so I bought a used computer that came with Windows 7. I have since installed my old hard-drive, which had Windows XP on it, in this system. What I am trying to do now is to figure out a way to migrate the settings from XP's BOOT.INI into 7's boot-loader. Below is the BOOT.INI I used in XP (I have reduced the strings and updated the disks to point to the new location of the old HD. Oh and I am not clear on the drive letters. In XP, I could boot the recovery console or MS-DOS from a file in C:\ that contains the boot-sector. I am not sure what drive letter it would be called now—I had to manually change all the drive letters of the old partitions in Windows 7 because it auto-assigned them all wrong/differently). [boot loader] timeout=10 default=multi(0)disk(0)rdisk(1)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP" /fastdetect multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP (Safe)" /safeboot:network /sos /bootlog /noguiboot C:\CMDCONS\BOOTSECT.DAT="Recovery Console" /cmdcons C:\BOOTSECT.DOS="MS-DOS 7.10" /win95 I have looked around, and have only been able to find some bcdedit commands to add XP to the boot-loader, but none that include information on setting safe-mode for it (or changing any of the XP load options for that matter). Not surprisingly I suppose, I have not found anything on adding the XP recovery console or DOS to the Windows 7 boot-loader. (Yes, I tried EasyBCD, but that did not help; it had no options for XP, and the best I managed was to get a choice of booting 7 or normal-mode XP—choosing XP didn't even give the old XP boot menu.) Can anyone please tell me how to export the entries in XP's boot.ini to 7's boot-loader so that on boot, I can choose to load the following: Windows 7 Windows 7 (Safe-mode) (Windows 7 (The Win7 counterpart of the Recovery Console)) Windows XP Windows XP (Safe-mode) Windows XP (Recovery Console) MS-DOS 7.10
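
    The BCD side only needs one legacy entry: everything below it (the XP Safe Mode line, the Recovery Console and the MS-DOS entry) keeps coming from BOOT.INI itself once ntldr is chained. A sketch with bcdedit, assuming the old XP system partition is now D: (adjust the letter; ntldr, NTDETECT.COM and boot.ini must sit in the root of that partition):

        :: create the legacy loader entry and point it at the partition holding ntldr + boot.ini
        bcdedit /create {ntldr} /d "Windows XP (legacy boot menu)"
        bcdedit /set {ntldr} device partition=D:
        bcdedit /set {ntldr} path \ntldr
        bcdedit /displayorder {ntldr} /addlast

    For the Windows 7 side, Safe Mode is normally reached from the F8 menu, but a dedicated entry can be made by copying {current} with bcdedit /copy and then setting safeboot (minimal or network) on the copy.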

    Read the article

  • Subversion 1.6 + SASL : Only works with plaintext 'userPassword'?

    - by SiegeX
    I'm attempting to setup svnserve with SASL support on my Slackware 13.1 server and after some trial and error I'm able to get it to work with the configuration listed below: svnserve.conf [general] anon-access = read auth-access = write realm = myrepo [sasl] use-sasl = true min-encryption = 128 max-encryption = 256 /etc/sasl2/svn.conf pwcheck_method: auxprop auxprop_plugin: sasldb sasldb_path: /etc/sasl2/my_sasldb mech_list: DIGEST-MD5 sasldb users $ sasldblistusers2 -f /etc/sasl2/my_sasldb test@myrepo: cmusaslsecretOTP test@myrepo: userPassword You'll notice that the output of sasldblistusers2 shows my test user as having both an encrypted cmusaslsecretOTP password as well as a plain text userPassword passwd. i.e., if I were to run strings /etc/sasl2/my_sasldb I would see the test users' password in plaintext. These two password entries were created with the following subversion book recommended command: saslpasswd2 -c -f /etc/sasl2/my_sasldb -u myrepo test After reading man saslpasswd2 I see the following option: -n Don't set the plaintext userPassword property for the user. Only mechanism-specific secrets will be set (e.g. OTP, SRP) This is exactly what I want to do, suppress the plain text password and only use the mechanism-specific secret (OTP in my case). So I clear out /etc/sasl2/my_sasldb and rerun saslpasswd2 as: saslpasswd2 -n -c -f /etc/sasl2/my_sasldb -u myrepo test I then follow it up with a sasldblistusers2 and I see: $ sasldblistusers2 -f /etc/sasl2/my_sasldb test@myrepo: cmusaslsecretOTP Perfect! I think, now I have only encrypted passwords.... only neither the Linux svn client nor the Windows TortoiseSVN client can connect to my repo anymore. They both present me with the user/pass challenge but that's as far as I get. TLDR So, what is the point of SVN supporting SASL if my sasldb must store its passwords in plaintext to work?
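
    The symptom suggests that DIGEST-MD5 has to be able to reconstruct the shared secret, which is exactly what the userPassword property provides; with -n only the OTP secret remains and the DIGEST-MD5 exchange has nothing to verify against. A hedged mitigation sketch is to accept the plaintext property but lock the database itself down (the owner/group names are assumptions for whatever account svnserve runs as):

        chown svn:svn /etc/sasl2/my_sasldb
        chmod 600 /etc/sasl2/my_sasldb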

    Read the article

  • Automatically check for Security Updates on CentOS or Scientific Linux?

    - by Stefan Lasiewski
    We have machines running RedHat-based distros such as CentOS or Scientific Linux. We want the systems to automatically notify us if there are any known vulnerabilities to the installed packages. FreeBSD does this with the ports-mgmt/portaudit port. RedHat provides yum-plugin-security, which can check for vulnerabilities by their Bugzilla ID, CVE ID or advisory ID. In addition, Fedora recently started to support yum-plugin-security. I believe this was added in Fedora 16. Scientific Linux 6 did not support yum-plugin-security as of late 2011. It does ship with /etc/cron.daily/yum-autoupdate, which updates RPMs daily. I don't think this handles Security Updates only, however. CentOS does not support yum-plugin-security. I monitor the CentOS and Scientific Linux mailinglists for updates, but this is tedious and I want something which can be automated. For those of us who maintain CentOS and SL systems, are there any tools which can: Automatically (Progamatically, via cron) inform us if there are known vulnerabilities with my current RPMs. Optionally, automatically install the minimum upgrade required to address a security vulnerability, which would probably be yum update-minimal --security on the commandline? I have considered using yum-plugin-changelog to print out the changelog for each package, and then parse the output for certain strings. Are there any tools which do this already?
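
    Where the security plugin does work, the whole requirement fits in a small cron job. A sketch only: the path and mail recipient are assumptions, and on CentOS the --security filter may silently match nothing because the base repositories do not ship updateinfo metadata.

        #!/bin/sh
        # hypothetical /etc/cron.daily/check-security-updates
        out=$(yum --security check-update 2>&1)
        # yum check-update exits 100 when updates are available, 0 when none, 1 on error
        if [ $? -eq 100 ]; then
            echo "$out" | mail -s "security updates pending on $(hostname)" root
        fi

    The automatic-install variant would swap the mail step for the yum update-minimal --security command already mentioned above, with the same metadata caveat.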

    Read the article

  • Please help with writing a MIB

    - by facha
    I have a problem with an snmpwalk query returning snmp variables in a non-uniform way: .1.3.6.1.2.1.10.127.1.3.3.1.2.215 -> Hex-STRING: 24 37 4C 0C 65 0E .1.3.6.1.2.1.10.127.1.3.3.1.2.216 -> Hex-STRING: 24 37 4C 0B A2 DA .1.3.6.1.2.1.10.127.1.3.3.1.2.217 -> STRING: "$7L f:" .1.3.6.1.2.1.10.127.1.3.3.1.2.218 -> STRING: "$7L k2" As you can see, some variables are of a STRING type, others are Hex-STRING. So, I'm trying to write a simple MIB to force them all to come out as Hex-STRING. This is where I've gotten so far: TEST-MIB DEFINITIONS ::= BEGIN PhysAddress ::= TEXTUAL-CONVENTION DISPLAY-HINT "1x:" STATUS current SYNTAX OCTET STRING test OBJECT-TYPE SYNTAX PhysAddresss MAX-ACCESS read-only STATUS current ::= { 1 3 6 1 2 1 10 127 1 3 3 1 2 } END However, snmpwalk doesn't seem to notice my textual convention (even though the "test" variable is being recognized). I still get a mixture of STRINGs and Hex-STRINGs. Could anybody point out where my mistake is? snmpwalk -v2c -cpublic 192.168.1.2 TEST-MIB::test ... TEST-MIB::test.216 = Hex-STRING: 24 37 4C 0B A2 DA TEST-MIB::test.217 = STRING: "$7L f:"
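
    Two things stand out in the module as posted: the OBJECT-TYPE's SYNTAX is spelled PhysAddresss (three s's), so it never resolves to the textual convention, and the ::= clause uses an absolute OID, which SMI does not allow. For comparison, a sketch of roughly the same MIB in SMIv2 form; names are placeholders, an smilint-strict compiler may still complain about the columnar object not sitting in a full table definition and about reusing the PhysAddress name from SNMPv2-TC:

        TEST-MIB DEFINITIONS ::= BEGIN
        IMPORTS
            OBJECT-TYPE, mib-2      FROM SNMPv2-SMI
            TEXTUAL-CONVENTION      FROM SNMPv2-TC;

        PhysAddress ::= TEXTUAL-CONVENTION
            DISPLAY-HINT "1x:"
            STATUS       current
            DESCRIPTION  "A MAC address, shown as colon-separated hex"
            SYNTAX       OCTET STRING (SIZE (6))

        -- .1.3.6.1.2.1.10.127.1.3.3.1 expressed relative to mib-2 (.1.3.6.1.2.1)
        testEntry OBJECT IDENTIFIER ::= { mib-2 10 127 1 3 3 1 }

        test OBJECT-TYPE
            SYNTAX      PhysAddress
            MAX-ACCESS  read-only
            STATUS      current
            DESCRIPTION "CM MAC address column"
            ::= { testEntry 2 }
        END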

    Read the article

  • Numeric UIDs/GIDs in ACLs on OS X server (10.6)

    - by Oliver Humpage
    Hi, On one (old OS X 10.4) server I'm tarring up some files which have ACLs. I'm then using ``tar -xp'' to untar the archive onto a new 10.6 server, which doesn't have any users/groups configured on it yet except the default admin (UID 501) (there's a reason for that, don't ask!). Obviously this means an "ls -lne" will list files and ACLs with numeric UIDs and GIDs. Now for the normal file permissions it makes sense: you get UIDs like "1037". And for some ACLs, it also makes sense: you get things like "AAAABBBB-CCCC-DDDD-EEEE-FFFF00000402" for groups (0x402 = GID 1026) and "FFFFEEEE-DDDD-CCCC-BBBB-AAAA000001F5" for users (0x1F5 = UID 501). However, some ACLs have a UIDs like "E51DA674-AE70-41BC-8340-9B06C243A262" or GIDs like "0A3FCD24-0012-46FA-B085-88519E55EF29" and I have absolutely no idea how to translate these IDs back into something that could be matched back to the original IDs (UID 1072 and GID 1047 respectively in this example). Can anyone help me translate these weird long hex strings? (Basically we're moving from local users to an Active Directory setup, so I want to move all files to the new server with permissions intact, then chmod, chgrp and set ACLs such that we translate old IDs to the new AD IDs. Hence needing some way to map between the sets. I don't believe there's an easier way to do this?) Many thanks, Oliver.
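
    The mixed formats come from how OS X generates the UUIDs: the FFFFEEEE-…/AAAABBBB-… values are synthesized directly from numeric uid/gid values, while the other long strings are the GeneratedUID of accounts defined in a directory node, so they only translate on a system that can still resolve those accounts. A sketch of the translation commands (flag spellings can differ slightly between releases; see man dsmemberutil, and 'someuser' is a placeholder):

        # UUID -> numeric id, on a box that can resolve the account
        dsmemberutil getid -X E51DA674-AE70-41BC-8340-9B06C243A262

        # numeric id -> UUID, to build the old-id -> new-id mapping table before rewriting ACLs
        dsmemberutil getuuid -u 1072
        dsmemberutil getuuid -g 1047

        # on the 10.4 source, dsmemberutil may be absent; the GeneratedUID is also visible via dscl
        dscl . -read /Users/someuser GeneratedUID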

    Read the article

  • How do I get a Mac to request a new IP address from another DHCP server running in parallel while Netbooting?

    - by huyqt
    Hello, I have an interesting situation. I'm trying to use a Linux based machine to allow Macs to Netboot (similar to PXE boot) by running a DHCP service in parallel with the "global" DHCP server. The local DHCP server hands out IPs in a private subnet, e.g., 10.168.0.10-10.168.254.254, while the "global" DHCP server hands out IPs from the IP range 10.0.0.1 - 10.0.1.254. The local DHCP range is only supposed to be used in Preboot Execution Environment and Netboot. The local DHCP server is something I have control over, but I do not have access to the global DHCP server. I have a filter to only allow members with the vendor strings "AAPLBSDPC/i386" and "PXEClient". PXE works fine, but Netboot has a quirk. The Apple systems that haven't been connected to the network yet can Netboot fine. But once one grabs a "real" IP address from the global DHCP server, it will "save" it and request it the next time we want it to netboot (which the local dhcp server won't give it). This is what I want: Mar 30 10:52:28 dev01 dhcpd: DHCPDISCOVER from 34:15:xx:xx:xx:xx via eth1 Mar 30 10:52:29 dev01 dhcpd: DHCPOFFER on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1 Mar 30 10:52:31 dev01 dhcpd: DHCPREQUEST for 10.168.222.46 (10.168.0.1) from 34:15:xx:xx:xx:xx via eth1 Mar 30 10:52:31 dev01 dhcpd: DHCPACK on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1 Mar 30 10:52:32 dev01 in.tftpd[5890]: tftp: client does not accept options Mar 30 10:52:53 dev01 in.tftpd[5891]: tftp: client does not accept options Mar 30 10:52:53 dev01 in.tftpd[5893]: tftp: client does not accept options Mar 30 10:52:54 dev01 in.tftpd[5895]: tftp: client does not accept options This is what I get when it already has a "stored" IP: Mar 30 10:51:29 dev01 dhcpd: DHCPDISCOVER from 00:25:xx:xx:xx:xx via eth1 Mar 30 10:51:30 dev01 dhcpd: DHCPOFFER on 10.168.222.45 to 00:25:xx:xx:xx:xx via eth1 Mar 30 10:51:31 dev01 dhcpd: DHCPREQUEST for 10.0.0.61 (10.0.0.1) from 00:25:xx:xx:xx:xx via eth1: ignored (not authoritative). Do you have any suggestions? It would be much appreciated.
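
    The "ignored (not authoritative)" line is ISC dhcpd declining to answer a REQUEST for an address outside its ranges; changing that is delicate, because an authoritative server will also start NAKing requests meant for the global server. A sketch of the usual shape of such a config, with the vendor-class filter folded into a class (the ranges are copied from the question, everything else is an assumption to test carefully off-hours):

        authoritative;   # lets this server NAK the stale 10.0.0.x REQUEST so the Mac falls back to DISCOVER

        class "netboot-clients" {
            match if substring(option vendor-class-identifier, 0, 9) = "AAPLBSDPC"
                  or substring(option vendor-class-identifier, 0, 9) = "PXEClient";
        }

        subnet 10.168.0.0 netmask 255.255.0.0 {
            pool {
                allow members of "netboot-clients";
                range 10.168.0.10 10.168.254.254;
            }
        }

    Worth stressing: with authoritative set, this dhcpd may NAK renewals intended for the global server as well, so keep the class filter tight and watch both servers' logs after the change.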

    Read the article

  • Bash script to run a clamscan on Ubuntu - how to use return values properly?

    - by Marius
    I'm trying to put together a simple script that will scan my home directory with clamscan and give me a warning if any viruses were found. What I have so far is: #! /usr/bin/env bash clamscan -l ~/.ClamScan/$(date +"%a%b%d") -ir /home RETVAL=$? [ $RETVAL -eq 0 ] && notify-send 'clamscan finished. No viruses found' [ $RETVAL -eq 1 ] && notify-send 'clamscan found a virus' && touch ~/Desktop/VirusFound [ $RETVAL -eq 2 ] && notify-send 'clamscan encountered errors. Check the logs' && touch ~/Desktop/ClamscanError find ~/.ClamScan/* -mtime +7 -exec rm {} \; However, I'm unsure about a couple of things: I'm always wary of using rm- as far as I can tell, the find command I've got should be deleting any log files that are more than a week old. I'm also not entirely sure how the return value testing works- I've got a manual that briefly covers bash, which says that the meaning of $? is "match one character", and I'm not entirely sure how that grabs the return value. Should I be using -eq or = for testing the return value? From what I can tell -eq tests strings and = tests numerals, but I'm not sure what the type of the return value is.
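
    On the two sub-questions: "$?" holds the exit status of the last command (the "match one character" text in the manual refers to pattern matching, a different context), and since exit statuses are integers the arithmetic test -eq is the conventional choice; = is the string comparison. A slightly tightened sketch of the same script:

        #!/usr/bin/env bash
        logdir=~/.ClamScan
        mkdir -p "$logdir"
        clamscan -l "$logdir/$(date +%a%b%d)" -ir /home
        case $? in                     # read immediately, before another command overwrites it
            0) notify-send 'clamscan finished. No viruses found' ;;
            1) notify-send 'clamscan found a virus'; touch ~/Desktop/VirusFound ;;
            *) notify-send 'clamscan encountered errors. Check the logs'; touch ~/Desktop/ClamscanError ;;
        esac
        # -delete only removes what -type f matched, which is a little safer than -exec rm
        find "$logdir" -type f -mtime +7 -delete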

    Read the article

  • RabbitMQ message consumers stop consuming messages

    - by Bruno Thomas
    Hi Server Fault, Our team is in a spike sprint to choose between ActiveMQ and RabbitMQ. We made 2 little producer/consumer spikes sending an object message with an array of 16 strings, a timestamp, and 2 integers. The spikes are OK on our dev machines (messages are consumed fine). Then came the benchmarks. We first noticed that sometimes, on our machines, when we were sending a lot of messages, the consumer would hang. It was still there, but the messages were accumulating in the queue. Then we moved to the bench platform: a cluster of 2 RabbitMQ machines (4 cores/3.2GHz, 4GB RAM), load balanced by a VIP; one to 6 consumers running on the RabbitMQ machines, saving the messages in a MySQL DB (same type of machine for the DB); 12 producers running on 12 AS machines (Tomcat), attacked with JMeter running on another machine. The load is about 600 to 700 HTTP requests per second on the servlets, which produce the same load of RabbitMQ messages. We noticed that sometimes consumers hang (well, they are not blocked, but they don't consume messages anymore). We can see that because each consumer saves around 100 msg/sec in the database, so when one stops consuming, the overall messages saved per second in the DB fall by the same ratio (if, say, 3 consumers stop, we fall from around 600 msg/sec to 300 msg/sec). During that time, the producers are OK and still produce at the JMeter rate (around 600 msg/sec). The messages stay in the queues and are taken by the consumers that are still "alive". We load all the servlets with the producers first, then launch all the consumers one by one, checking that the connections are OK, then run JMeter. We are sending messages to one direct exchange. All consumers are listening to one persistent queue bound to the exchange. That point is major for our choice. Have you seen this with RabbitMQ? Do you have an idea of what is going on? Thank you for your answers.
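
    When a consumer "goes quiet" while the queue keeps growing, the first thing worth checking is whether messages are piling up unacknowledged on its channel, which usually points at a missing basic.ack or an unlimited/oversized prefetch on the client rather than at the broker. A diagnostic sketch, run on either cluster node:

        # per-queue backlog: ready vs. unacknowledged vs. number of consumers
        rabbitmqctl list_queues name messages_ready messages_unacknowledged consumers

        # per-channel view: a stuck consumer shows a large unacked count and no further deliveries
        rabbitmqctl list_channels connection consumer_count prefetch_count messages_unacknowledged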

    Read the article

  • Object Not found - Apache Rewrite issue

    - by Chris J. Lee
    I'm pretty new to setting up apache locally with xampp. I'm trying to develop locally with xampp 1.7.4 (Ubuntu 11.04) for a Drupal site. I've actually git pulled an exact copy of this Drupal site from another testing server hosted at MediaTemple. The issue: I visit my local development environment virtual host (http://bbk.loc) and the front page renders correctly with no errors from Drupal or Apache. The issue is that subsequent pages return an "Object not found" error from Apache. What is more bizarre is that when I add various query strings, the pages are found (like http://bbk.loc?p=user). VHost file: NameVirtualHost bbk.loc:* <Directory "/home/chris/workspace/bbk/html"> Options Indexes Includes execCGI AllowOverride None Order Allow,Deny Allow From All </Directory> <VirtualHost bbk.loc> DocumentRoot /home/chris/workspace/bbk/html ServerName bbk.loc ErrorLog logs/bbk.error </VirtualHost> BBK.error error log file: [Mon Jun 27 10:08:58 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/ [Mon Jun 27 10:21:48 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/sites/all/themes/bbk/logo.png, referer: http://bbk.$ [Mon Jun 27 10:21:51 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/ Actions I've taken: Move Rewrite module loading to load before cache module http://drupal.org/node/43545 Verify mod_rewrite works with .htaccess file Any ideas why mod_rewrite might not be working?
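
    One detail in the posted vhost jumps out: AllowOverride None makes Apache ignore Drupal's .htaccess, and with it the clean-URL rewrite rules, which would explain why / works, /node 404s, and query-string URLs still resolve. A hedged correction sketch for the <Directory> block (FollowSymLinks is added because mod_rewrite in .htaccess typically needs it):

        <Directory "/home/chris/workspace/bbk/html">
            Options Indexes Includes ExecCGI FollowSymLinks
            AllowOverride All        # lets Drupal's .htaccess (clean-URL rewrites) take effect
            Order Allow,Deny
            Allow From All
        </Directory>

    After changing it, restart Apache and confirm mod_rewrite is actually enabled in the XAMPP httpd.conf.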

    Read the article

  • directory services group query changing randomly

    - by yamspog
    I am receiving an unusual behaviour in my asp.net application. I have code that uses Directory Services to find the AD groups for a given, authenticated user. The code goes something like ... string username = "user"; string domain = "LDAP://DC=domain,DC=com"; DirectorySearcher search = new DirectorySearcher(domain); search.Filter = "(SAMAccountName=" + username + ")"; And then I query and get the list of groups for the given user. The problem is that the code was receiving the list of groups as a list of strings. With our latest release of the software, we are starting to receive the list of groups as a byte[]. The system will return string, suddenly return byte[] and then with a reboot it returns string again. Anyone have any ideas? code sample: DirectoryEntry dirEntry = new DirectoryEntry("LDAP://" + ldapSearchBase); DirectorySearcher userSearcher = new DirectorySearcher(dirEntry) { SearchScope = SearchScope.Subtree, CacheResults = false, Filter = ("(" + txtLdapSearchNameFilter.Text + "=" + userName + ")") }; userResult = userSearcher.FindOne(); ResultPropertyValueCollection valCol = userResult.Properties["memberOf"]; foreach (object val in valCol) { if (val is string) { distName = val.ToString(); } else { distName = enc.GetString((Byte[])val); } }

    Read the article

  • Find and replace several different values all at once

    - by matt
    I have a file with multiple instances of Text_1 and Text1 and I need to replace both those strings with Text_A and TextB respectively. Currently I'm doing two Find and Replace functions on each file one that finds Text_1 and replaces it with Text_A and the other that finds Text1 and replaces it with TextB. Is there any way to do this all at once instead of having to run "Find and Replace" twice? I am using Dreamweaver CS3, but I also have Notepad++, regular Notepad, OO Writer, MS Word if those will be easier. Ideally I could do this in Dreamweaver or Notepad++ but I'm open to downloading something else to get the job done. I'd prefer not to have to do any command line stuff or create a batch file (while I'm aware of it, I don't understand it really). Edit: In case the above description isn't clear, let me explain it this way... I want to run Find & Replace 1 time in 1 document and I want it to do ALL of the following during that one Find & Replace instance: Find: Text_1 and Replace with: Text_A Find: Text1 and Replace with: TextB I am not trying to do a Find and Replace across several documents.
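
    Outside the editors, a single pass with sed does both substitutions in one command; a sketch (GNU sed, and -i edits the file in place, so try it on a copy first):

        sed -i -e 's/Text_1/Text_A/g' -e 's/Text1/TextB/g' file.html

    Since the question prefers to stay in an editor, the closest equivalent is recording the two Replace All steps as a macro in Notepad++ (Macro > Start Recording), which can usually then be replayed as a single action on each file.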

    Read the article

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the use of the VSS2SVN script. I've tested the generated dump file before and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprit). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions and continue to load the repository. It worked but took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, etc. This dump file is pretty large (~5GB) and takes several hours to load. I think I know the answer to this but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, backup, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.
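
    There is no skip-this-revision switch in svnadmin load itself, but one flag is worth trying before another round of manual splitting, since a lot of VSS-conversion breakage sits in revision properties rather than in the revision bodies (needs Subversion 1.7 or newer; the paths are placeholders):

        svnadmin load --bypass-prop-validation /path/to/repo < vss2svn.dump

    If the bad revisions really are unloadable, the split-and-continue cycle can still be scripted: run the load, read the last committed revision from its output, cut the dump just past the failure at the next "Revision-number:" header with a small script, and loop until the file is exhausted.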

    Read the article
