Search Results

Search found 14304 results on 573 pages for 'inside'.

  • amazon dynamoDB or MySQL for storing large arrays inside each row

    - by Logan Besecker
    I am trying to decide which database I should use for an application I'm making. I was leaning toward DynamoDB because of its scalability, but then I read in the documentation that there is a limit of 64 KB on item size, and it looks like MySQL has a similar restriction documented here. This application will store a lot of data in two arrays, each of which could contain upwards of 10,000-100,000 strings. I estimate that these strings will each be somewhere around 20 characters long, so each element of an array will be around 40 bytes and each array could be around 4 MB. Given this predicament, which database on Amazon AWS would you use, or how would you get around the limit on size per row? Thanks in advance, Logan Besecker

    Read the article

  • Using Openfiler inside a virtual machine and VMware Fault Tolerance

    - by SoMoS
    Hello, currently I have 2 servers running Fault Tolerance together with another server running Openfiler as an iSCSI server (it looks like Fault Tolerance does not work without it). I would like to remove that server and run the Openfiler distribution as another Fault Tolerance-protected machine. Is this possible? This way I could save one server and also have faster disk access. Thanks in advance for your help.

    Read the article

  • Access folder right-click from inside the folder in Windows 7

    - by BrenBarn
    In Windows XP, if you had a folder open in Explorer, you could access the right-click context menu of a folder by right-clicking the folder icon in the titlebar of the Explorer window. In Win7 this no longer works. Right-clicking the background of the open folder, or right-clicking the folder icon in the info pane at the bottom, does not give the same context menu; there are items that are not in these menus, but are in the menu when I right-click on a folder from outside it. Given that I have a folder open in Explorer, how can I, without navigating out of the folder, access the same right-click context menu that I would get if I navigated to the parent folder and right-clicked my target folder?

    Read the article

  • Search inside of text files

    - by Matt
    So here is the situation. I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail as .tmp files. Essentially these are just text files kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned. Then when I search the files one by one (painful at best), I find the files I need. Do I not understand the search feature well enough, or do I have the indexing configured wrong? I really don't care what I need to use to search the files; even a third-party app is fine with me. I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else; as long as I can find a way to get the job done I will be happy. Paid solutions are fine as well. Thanks everyone in advance.
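
    A hedged sketch of one approach: if a Unix-style toolset such as Cygwin or Git Bash is available on the server, grep can search the files directly without relying on the Windows indexer (the path and address are placeholders, not Merak's actual layout):

        # list every log/mail file that contains the address
        grep -rl --include='*.log' --include='*.tmp' 'person@example.com' /path/to/merak/mail/

    The Windows indexer decides what to search by file extension, and extensions like .tmp are typically not treated as text, which may explain the empty results described above.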

    Read the article

  • Zero sized tar.gz file found inside a tar.gz file

    - by PavanM
    My current directory contains a single file:

        $ ls -l
        -rw-r--r-- 1 root staff 8 May 28 09:10 pavan

    Now, I want to tar and gzip this file:

        $ tar -cvf - * 2>/dev/null | gzip -vf9 > pavan.tar.gz 2>/dev/null

    (I am aware I am creating the zipped file in the same directory as the original file.) When I run the above tar/gzip commands around 20 times, a few times I observe that the final pavan.tar.gz archive contains a ZERO-sized pavan.tar.gz file. I am not sure where this zero-sized file is coming from. Note: I am NOT running the tar/gzip commands on an already existing tar.gz file, and I always make sure that the directory has only one file before running the commands. On googling, as described here, I suspected that the tar.gz being created was also part of the file being archived. But in my case, gzip is the one creating the final file, and by the time gzip runs, tar should be done tarring. This is happening on AIX, but I've used the Linux tag too, to draw more attention, as I guess the problem is platform independent.
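
    A hedged explanation: the shell sets up the > pavan.tar.gz redirection (creating an empty file) when it launches the pipeline, and this can race with the expansion of *, so tar occasionally sees the still-empty output file and archives it. Two sketches that avoid the race:

        # write the archive outside the directory being archived
        tar -cvf - * 2>/dev/null | gzip -9 > /tmp/pavan.tar.gz

        # or exclude the output file explicitly (--exclude is a GNU tar
        # option and may not exist in AIX's native tar)
        tar --exclude=pavan.tar.gz -cvf - * 2>/dev/null | gzip -9 > pavan.tar.gz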

    Read the article

  • Gmail sends bulk messages sent by postfix to spam - spf, rDNS are set up (headers inside)

    - by snitko
    Here are the headers of the blocked messages (actual domain replaced with domain.com, IP address with n.n.n.n and gmail account name with person.account):

        Delivered-To: [email protected]
        Received: by 10.216.89.137 with SMTP id c9cs247685wef; Tue, 6 Dec 2011 16:06:37 -0800 (PST)
        Received: by 10.224.199.134 with SMTP id es6mr14447757qab.2.1323216395590; Tue, 06 Dec 2011 16:06:35 -0800 (PST)
        Return-Path: <[email protected]>
        Received: from mail.domain.com (domain.com. [n.n.n.n]) by mx.google.com with ESMTP id b16si7471407qcv.131.2011.12.06.16.06.35; Tue, 06 Dec 2011 16:06:35 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates n.n.n.n as permitted sender) client-ip=n.n.n.n;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates n.n.n.n as permitted sender) [email protected]
        Received: by mail.domain.com (Postfix, from userid 5001) id 26ADE381E3; Tue, 6 Dec 2011 19:06:35 -0500 (EST)
        Received: from domain.com (domain.com [127.0.0.1]) by mail.domain.com (Postfix) with ESMTP id 0148638030 for <[email protected]>; Tue, 6 Dec 2011 19:06:31 -0500 (EST)
        Date: Tue, 06 Dec 2011 19:06:31 -0500
        From: DomainApp <[email protected]>
        Reply-To: [email protected]
        To: [email protected]
        Message-ID: <[email protected]>
        Subject: Roman Snitko says hi
        Mime-Version: 1.0
        Content-Type: text/plain; charset=UTF-8
        Content-Transfer-Encoding: 7bit
        X-No-Spam: True
        Precedence: bulk
        List-Unsubscribe: [email protected]

    Messages go to the Spam folder on various gmail accounts, so it's not a coincidence. I followed all the gmail guides on sending bulk emails from https://mail.google.com/support/bin/answer.py?hl=en&answer=81126. I also checked my IP address at http://www.dnsblcheck.co.uk/ and it is NOT on the blacklists. Thus I have two questions: What may be the possible reason for the messages going to the Spam folder? Is there any way to contact Google and ask them what causes this? Update: I have set up OpenDKIM on my server and everything works; gmail message headers say that dkim=pass, which means everything is set up correctly. Messages still end up in the Spam folder.
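
    A hedged set of checks (domain.com and n.n.n.n stand in for the real values, as in the question): with SPF and DKIM passing, the remaining DNS-side suspects are reverse DNS and a missing DMARC record; beyond DNS, Gmail also weighs content and sender reputation, and mail marked Precedence: bulk is judged strictly.

        dig +short TXT domain.com                      # SPF record
        dig +short TXT _dmarc.domain.com               # DMARC policy; publishing one can help
        dig +short -x n.n.n.n                          # rDNS should resolve back to mail.domain.com
        dig +short mail.domain.com                     # ...and forward-confirm to n.n.n.n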

    Read the article

  • Mapping drives on Mac OS X (Leopard/Snow Leopard) permanently inside a LAN

    - by Shyam
    Hi, how do I "map" shared folders on a Mac, permanently? By map, I do not mean 'connect', but permanently add the share to the system so it still exists after a reboot. Since workstations tend to shut down, I also wonder about the symptoms and cures in case that happens. In Linux this can be done using the fstab file, but I noticed that volumes are mounted in a different structure than in Linux. I need this to back up some workstations, running a recursive job over a single directory that should contain the shared folders. I use Terminal to access the main system, so by preference the solution would be one that works within a bash shell rather than the GUI. I can access all the folders in Finder. Thanks!
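
    A hedged sketch using autofs, which Leopard and Snow Leopard ship with (server, share, and mount point names are placeholders): a direct map remounts the share automatically on access, so it survives reboots and workstation downtime without manual reconnecting.

        # /etc/auto_master -- add a direct-map line (the map file name is an assumption)
        /-    auto_afp    -nosuid

        # /etc/auto_afp -- one line per share; credentials in this file are readable
        # by root-level tools, so a dedicated backup account is advisable
        /Volumes/backups    -fstype=afp    afp://backupuser:password@workstation.local/Backups

        # reload the automounter
        sudo automount -vc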

    Read the article

  • How to use HTML markup tags inside a Bash script

    - by CONtext
    I have a crontab and a simple bash script that emails me every so often with PHP, NGINX and MySQL errors from their log files. This is a simplified example:

        #/home/user/status.sh
        EMAIL=[email protected]
        PHP_ERROR=`tail -5 /var/log/php-fpm/error.log`
        NGINX_ERROR=`tail -5 /var/log/nginx/error.log`
        MYSQL_ERROR=`tail /var/log/mysqld.log`
        DISK_SPACE=`df -h`
        echo "
        Today's server report:
        ==================================
        DISK_SPACE: $DISK_SPACE
        ---------------------------------
        MEMORY_USAGE: $MEMORY_USAGE
        -----------------------------------
        NGINX ERROR: $NGINX_ERROR
        -----------------------------------
        PHP ERRORS: $PHP_ERROR
        ------------------------------------
        MYSQL_ERRORS: $MYSQL_ERROR
        -------------------------------------
        " | mail -s "Server reports" $EMAIL

    I know this is very basic usage, but as you can see I am trying to separate the errors, and none of the HTML tags (including \n) are working. So my question is: is it possible to use HTML tags to format the text, and if not, what are the alternatives?
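
    The mail command sends plain text by default, so HTML tags arrive as literal text. A hedged sketch that builds a proper MIME message and hands it to sendmail (often /usr/sbin/sendmail; Postfix and most MTAs provide it); the HTML itself is an example:

        {
          echo "To: $EMAIL"
          echo "Subject: Server reports"
          echo "MIME-Version: 1.0"
          echo "Content-Type: text/html; charset=UTF-8"
          echo
          echo "<html><body>"
          echo "<h2>Today's server report</h2>"
          echo "<h3>DISK_SPACE</h3><pre>$DISK_SPACE</pre><hr>"
          echo "<h3>NGINX ERROR</h3><pre>$NGINX_ERROR</pre><hr>"
          echo "<h3>PHP ERRORS</h3><pre>$PHP_ERROR</pre>"
          echo "</body></html>"
        } | sendmail -t

    Some mailx builds also accept extra headers directly (e.g. a -a "Content-Type: text/html" flag), but that flag's meaning varies between implementations, so the sendmail form is the safer bet.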

    Read the article

  • Launch only the command if the previous one worked inside SSH, SHELL

    - by Namari
    I've got a shell script which uses a pipe to separate my two commands:

        ssh -oBatchMode=yes user@hostname "mysql -u yop -pyop -c yop | echo test"

    The problem is that even if my connection to mysql doesn't work, it still sends the echo test. I would like to forbid my script from sending any command if the previous command didn't work. I searched for a solution with an if condition, but it doesn't seem possible that way. Does anyone have an idea? Thanks, Namari
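
    A pipe connects the two commands' output and input and runs both unconditionally; the && operator is what chains commands on success. A minimal sketch, keeping the mysql invocation from the question:

        # echo runs only if mysql exits with status 0
        ssh -oBatchMode=yes user@hostname "mysql -u yop -pyop -c yop && echo test"

        # optionally report failure with ||
        ssh -oBatchMode=yes user@hostname "mysql -u yop -pyop -c yop && echo test || echo 'mysql failed'"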

    Read the article

  • See all output from commands performed inside screen

    - by user1032531
    I am using screen (http://www.gnu.org/software/screen/manual/screen.html) to access my minecraft console. I created a server script in /etc/init.d and have minecraft running in the background. Then, to access the minecraft console, I just type # screen -r in bash, and I can run commands in the screen shell. The problem is that if I run some command which outputs a lot of text, it exceeds the size of the screen and pushes the beginning of the output off the page, and I cannot seem to scroll up to see it. How can I scroll back and view all the output? How can I pause the output (maybe something like more or less)?
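
    Screen's built-in copy/scrollback mode covers both needs; a short sketch (the buffer size is an example):

        # enter scrollback mode: press Ctrl-a then [  (exit with Esc);
        # navigate with the arrow keys, PgUp/PgDn, or vi-style keys

        # enlarge the scrollback buffer for future sessions
        echo "defscrollback 10000" >> ~/.screenrc

        # or for the current session: press Ctrl-a then :  and type
        #   scrollback 10000

    For paging a single command's output, piping it through less inside the screen session also works.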

    Read the article

  • How to use iSCSI inside a Hyper-V VM?

    - by William
    I have 2 Dell R710 servers (intended for a Hyper-V cluster) and an MD3000i SAN, set up as follows:

        Server1/Server2:
        NIC 1: connected to company LAN
        NIC 2: crossover to the other server's NIC 2
        NIC 3: crossover to iSCSI port of SAN controller 1
        NIC 4: crossover to iSCSI port of SAN controller 2

    I have both servers set up as diskless servers with iSCSI boot from the SAN without problems. But how can I access iSCSI from within the VMs such that I can set up clustering between them? I can ping from the host to the SAN, but I found that NIC 3/4 cannot be used for a virtual network in Hyper-V. What am I doing wrong?

    Read the article

  • Trying to install wordpress inside rails app with nginx and fastcgi

    - by pinouchon
    I have a rails app (let's call it myapp) running at www.myapp.com. I want to add a wordpress blog at www.myapp.com/blog. The webserver for the rails app is thin (see the upstream block). The wordpress runs with php-fastcgi. The rails app works fine. My problem is the following: in /home/myapp/myapp/log/error.log I get:

        2013/06/24 10:19:40 [error] 26066#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.138.20, server: www.myapp.com, request: "GET /blog/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.myapp.com"

    Here is the nginx conf file:

        upstream myapp {
          server unix:/tmp/thin_myapp.0.sock;
          server unix:/tmp/thin_myapp.1.sock;
          server unix:/tmp/thin_myapp2.sock;
        }

        server {
          listen 80;
          server_name www.myapp.com;
          client_max_body_size 20M;
          access_log /home/myapp/myapp/log/access.log;
          error_log /home/myapp/myapp/log/error.log error;
          root /home/myapp/myapp/public;
          index index.html;

          location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            # Index HTML Files
            if (-f $document_root/cache/$uri/index.html) {
              rewrite (.*) /cache/$1/index.html break;
            }
            if (!-f $request_filename) {
              proxy_pass http://myapp;
              break;
            }
            # try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
          }

          location /blog/ {
            root /var/www/wordpress;
            fastcgi_index index.php;
            if (!-e $request_filename) {
              rewrite ^(.*)$ /blog/index.php?q=$1 last;
            }
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;
            fastcgi_pass localhost:9000; # port to FastCGI
          }
        }

    Any ideas why that doesn't work? How do I make sure that php-fastcgi is configured properly?

    Edit: I tested whether fastcgi is running with telnet:

        $ telnet 127.0.0.1 9000
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    And it's not.
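
    The telnet test shows nothing is listening on 127.0.0.1:9000, so nginx has no FastCGI backend to pass /blog/ requests to. A hedged sketch for getting one running (package, path and service names vary by distro and PHP version):

        # confirm nothing listens on port 9000
        sudo netstat -tlnp | grep :9000

        # option 1: php-fpm -- set "listen = 127.0.0.1:9000" in its pool config
        # (e.g. /etc/php5/fpm/pool.d/www.conf on Debian/Ubuntu), then:
        sudo service php5-fpm restart

        # option 2: a standalone FastCGI process via spawn-fcgi (php-cgi required)
        spawn-fcgi -a 127.0.0.1 -p 9000 -f /usr/bin/php-cgi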

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should not lose any log messages in case any single server goes offline, or in case an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, though S3 is a good place as well. It should also be realtime, so that messages arrive in two different availability zones within seconds of their generation. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, which I will list here:

    Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then writing some manual deduplication runs over the logfiles. I haven't found an easy solution to the duplication problem.

    Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases would be Beetle, which seems to provide high availability and deduplication via a redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part again for something that shouldn't be terribly difficult.

    Logstash to ElasticSearch: I could run Logstash on all the servers, have all the filtering and processing rules in the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should bring me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure if the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.

    rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode. And log rotations mess things up unless I'm careful.

    rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog.

    None of these solutions seems terribly good, and they still have many unknowns, so I am asking for more information here from people who have set up centralized reliable logging as to what the best tools are to achieve that goal.
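
    As a hedged illustration of the rsync option only (hostnames and paths are placeholders): rsync's --append flag transfers just the data added since the last run, which matches append-only logs, and looping over two destinations covers both availability zones.

        # ship local logs to replicas in two availability zones
        for dest in logs-az1.internal logs-az2.internal; do
            rsync -az --append /var/log/app/ "backup@${dest}:/srv/logs/$(hostname)/"
        done
        # caveat: --append assumes files only ever grow, so rotated or
        # truncated files need separate handling, as the question notes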

    Read the article

  • Very long DNS lookups inside my network

    - by Nuno Cordeiro
    Ever since I installed DD-WRT (v24-sp2 08/07/10 std-usb-ftp) on my router (RT-N16), my browsing has gotten substantially slower. Using FirePHP I figured out that this is being caused by VERY long DNS lookups (~30 seconds). When the domain name was very recently accessed, speed is very good. I tried changing DNS servers on the computer and I tried messing around with the options in DD-WRT; I have tried configuring the router with Google DNS and/or OpenDNS. My current DNS servers according to ipconfig -all are:

        192.168.1.1
        208.67.220.220
        8.8.8.8
        208.67.222.222

    Can someone help me debug and solve this problem? I'd like to snoop on the requests themselves. How can I see which DNS requests are being sent and which are failing or succeeding? Note: I don't expect this to be relevant, but my router is connected to the internet through an ONT.
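
    A hedged way to time each resolver separately (dig ships in the dnsutils/bind-tools packages; the test domain is an example):

        # query the router's forwarder and the upstream resolvers directly
        for ns in 192.168.1.1 208.67.220.220 8.8.8.8 208.67.222.222; do
            echo "== $ns"
            dig @"$ns" example.org A | grep "Query time"
        done

        # watch DNS traffic on the router itself (via ssh; assumes tcpdump
        # is installed or available in the DD-WRT build)
        tcpdump -ni any port 53

    If queries straight to 8.8.8.8 are fast but queries via 192.168.1.1 stall, the router's dnsmasq forwarder is the likely culprit.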

    Read the article

  • Resize Ubuntu Linux system to smaller disk inside VMware ESXi

    - by mlambie
    I have several Ubuntu Linux virtual machines running on VMware ESXi hosts that have all been allocated disks much larger than their required capacity. As space is now becoming an issue on our SAN, I'd like to investigate downsizing the allocated disk space on these machines. All systems will be completely imaged before I begin making changes, and I will always retain a pristine backup in case the partition resizing does not work. Is there an easier way than the following procedure, or is there a better solution entirely?

    1. Shut down and assign a second, smaller disk to the virtual machine
    2. Boot using the SystemRescueCD
    3. Use GParted to resize the original (source) partition, making it smaller
    4. Clone the new, smaller partition to the second disk
    5. Shut down and remove the initial disk from the virtual machine
    6. Reboot and force fsck to check the filesystem
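
    One step this procedure usually also needs (a hedged sketch from the SystemRescueCD shell; device names are examples) is making the cloned disk bootable and fixing any UUID references:

        # install a boot loader on the new disk after cloning
        mount /dev/sdb1 /mnt
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys /mnt/sys
        chroot /mnt grub-install /dev/sdb

        # if /etc/fstab or the grub config reference filesystem UUIDs,
        # check the new partition's UUID and update them to match
        blkid /dev/sdb1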

    Read the article

  • Load login shell inside user cronjob

    - by sa125
    I'm trying to run a rake task via a scheduled cronjob. My crontab looks something like this:

        0 1 * * 1-7 /bin/bash -l -c "cd ~/jobs/rake && rake reports:create >> ~/jobs/logs/cron.log"

    Ruby on my account is provided by RVM, which is loaded via ~/.bashrc (before the no-interaction check):

        # load RVM env
        [[ -s $HOME/.rvm/scripts/rvm ]] && source $HOME/.rvm/scripts/rvm

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return
        # ... rest of logic

    Time and again this task fails to run, since RVM isn't loaded when the task is called (it uses the system's /usr/bin/ruby instead) and gem dependencies are missing. How can I make crontab load my shell environment before executing my scheduled jobs? Thanks.
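
    A hedged explanation: bash -l reads ~/.bash_profile (or ~/.profile), not ~/.bashrc, so the RVM line above never runs under cron. Two sketches that avoid depending on startup files entirely (paths assume a default RVM install; the ruby version is an example):

        # source RVM explicitly in the cron command
        0 1 * * * /bin/bash -c 'source "$HOME/.rvm/scripts/rvm" && cd ~/jobs/rake && rake reports:create >> ~/jobs/logs/cron.log 2>&1'

        # or generate an RVM wrapper once, then call the wrapped binary from cron
        rvm wrapper ruby-1.9.3 cron rake    # creates ~/.rvm/bin/cron_rake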

    Read the article

  • How can I tag sequences inside videos?

    - by Antoine
    For example, let's say I find a sequence in a WWDC session (Apple videos) that explains a technical point very well. I would like to be able to store the exact range in the video, tag it with keywords, and manage all that information. And when I want to share this information, I could simply export to a plain English sentence: "Have a look at the video named …". It would be great to be able to launch a video player for local videos and go straight to the beginning of the sequence.

    Read the article

  • SSH access from outside to a pc inside network

    - by Raja
    I have a static IP and an ADSL router linked to a Linksys wireless router, to which all my machines are connected. I want to set up SVN on one of the machines and provide SSH access that is usable by users outside my network. Would this be possible? Even just SVN access through the web would be fine. Please let me know everything that needs to be done to achieve this. I have an Ubuntu VM running on an iMac (Leopard) and another two Windows 7 32/64-bit machines. I can also set up a standalone Ubuntu or Windows XP machine. Thanks, Raja.
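
    A hedged outline for the Ubuntu VM case (paths and addresses are examples): forward SSH through both routers to the VM, run sshd there, and serve the repository over svn+ssh.

        # on the Ubuntu machine: install the SSH server and Subversion
        sudo apt-get install openssh-server subversion
        svnadmin create /srv/svn/myrepo

        # on the ADSL router, forward external port 22 to the Linksys;
        # on the Linksys, forward it on to the VM's LAN IP, port 22
        # (the VM also needs bridged networking, or its own forward from the iMac host)

        # users outside the network can then check out with:
        svn checkout svn+ssh://user@YOUR.STATIC.IP/srv/svn/myrepo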

    Read the article

  • Why does WebDAV fail from inside home network

    - by Claus
    On my OS X server there is a folder that is configured in the Server app to be accessible via WebDAV. This folder is used to sync OmniFocus. On my router, I have set up dynamic DNS. When I am outside my home network (physically away, or connected via a VPN), I can connect and sync fine via:

        https://<server name from dyndns>/<username>/<path to WebDAV folder>

    However, when I am in my home network, the connection to WebDAV does not work (other connections, AFP for example, do work). What could be some reasons why I can't connect to WebDAV from within my home network? What log files could give hints, and where are they stored? I am running OS X Server 10.9.3 and Server.app. Thanks for your help.
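
    A hedged guess: many consumer routers don't support NAT loopback (hairpinning), so from inside the LAN the dyndns name resolves to the public IP but the connection never turns back in; AFP keeps working if it's addressed by the server's local name. A few checks and a workaround (addresses and names are placeholders):

        # does the dyndns name resolve to the public IP from inside the LAN?
        dig +short myserver.dyndns.example

        # does WebDAV answer on the server's LAN address directly?
        curl -vk -u username https://192.168.1.10/username/webdavfolder/

        # workaround: pin the name to the LAN IP on LAN clients
        echo "192.168.1.10 myserver.dyndns.example" | sudo tee -a /etc/hosts

    On OS X Server 10.9 the WebDAV file-sharing service is fronted by Apache, so the logs under /var/log/apache2/ are a reasonable first place to look.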

    Read the article

  • Bridging a Windows 7 and Ubuntu dual boot inside an OS

    - by matsko
    I have Windows 7 and Ubuntu installed on my local PC. They're both installed on separate partitions of the same machine, and when the computer boots up the user chooses which one to use as the OS. This all works fine, but to switch from one OS to the other, I am required to restart the computer and boot the other OS. Is it possible to use an "inline" tool that allows switching between both OSs as if they were windows in Windows 7? Which tool would that be? Does anyone know of anything other than Parallels? Also, are there any free tools that would do this? Many thanks.
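
    One free approach is booting the existing Ubuntu partition inside a VirtualBox VM via a raw-disk VMDK. A hedged sketch (VirtualBox must be installed on the Windows 7 host; raw-disk access needs an administrator shell and carries real risk of damaging the other install if misconfigured):

        # map the physical disk into a VMDK the VM can boot from
        # (disk and partition numbers are examples -- verify with diskmgmt.msc)
        VBoxManage internalcommands createrawvmdk -filename ubuntu.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3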

    Read the article

  • Iptables based router inside KVM virtual machine

    - by Anton
    I have a KVM virtual machine (CentOS 6.2 x64) with 2 NICs:

        eth0 - real external IP 1.2.3.4 (simplified example instead of the real one)
        eth1 - local internal IP 172.16.0.1

    Now I'm trying to map port 1.2.3.4:80 to 172.16.0.2:80. Current iptables rules:

        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A PREROUTING -p tcp -m tcp -d 1.2.3.4 --dport 80 -j DNAT --to-destination 172.16.0.2:80
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012
        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *mangle
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012
        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012

    But nothing works; the port is not forwarded. A similar configuration without virtualization seems to work. What am I missing? Thanks!
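
    A hedged first check: DNAT alone is not enough; the kernel must also be willing to route packets between eth0 and eth1, and on CentOS IP forwarding is off by default. With the filter FORWARD policy already ACCEPT, this sysctl is the most likely missing piece:

        # enable routing now
        sysctl -w net.ipv4.ip_forward=1
        # and persist it across reboots
        sed -i 's/^net.ipv4.ip_forward.*/net.ipv4.ip_forward = 1/' /etc/sysctl.conf

        # verify the DNAT rule is being hit while testing from outside
        iptables -t nat -L PREROUTING -v -n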

    Read the article
