Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.


  • Can't access individual samba shares

    - by Richard Maddis
    I've just installed CentOS and I'm configuring Samba. I have a share with the following in the smb.conf file:

      [storage]
      comment = Main storage for all use
      path = /share
      public = yes
      browseable = yes
      writable = yes
      printable = no
      write list = bob root
      create mask = 0775
      guest ok = yes
      available = yes

    In Windows Explorer, I can reach the page listing all the shares on the server, but when I click on the shares themselves, I get an error saying that the folder cannot be found. I have verified that the folder /share exists, and I've also given it 777 permissions, so it cannot be due to permissions. What is causing this? I can post more config files if necessary.

    Read the article

  • PHP include path problem: same code works with Ubuntu's default Apache and PHP conf, but not on CentOS

    - by Neo
    The same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: to the path. I tried setting include_path to different things but it just doesn't work. The file is in a directory called language in the same folder as the file that is including it, and I'm using:

      include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";

    and

      include dirname(__FILE__)."/language/language.php";

    and

      include "language/language.php";

    and a lot of other combinations, but I can't get it to find the file.

      Fatal error: require_once() [function.require]: Failed opening required '/home/neo/public_html/migration/include/class/core/storage.inc' (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration') in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153

    Read the article

  • Perl DBI: How to get schemas

    - by karthi-27
    Hi all, I am using Perl DBI. In it, $dbase->tables() will return all the tables in the corresponding database. In the same way, I want to know the schemas available in the database. Is there any function available for that?
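
    Since the point is just "list the schemas", one route is to query the standard INFORMATION_SCHEMA views through DBI like any other statement (DBI also exposes a catalog method, $dbh->table_info(), for this kind of metadata). A minimal SQL sketch, assuming the database provides INFORMATION_SCHEMA (MySQL, PostgreSQL and SQL Server do):

      -- returns one row per schema visible to the connected user;
      -- run it via DBI with prepare()/execute() as usual
      SELECT schema_name
      FROM information_schema.schemata
      ORDER BY schema_name;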

    Read the article

  • SQL Comments on Create Table on SQL Server 2008

    - by user494901
    I need to create some pretty big tables in SQL Server 2008. While I do have SQL Server Management Studio, I would like to comment the tables and the columns when I create the table. How do I do this? Example of the query I am running:

      CREATE TABLE cert_Certifications
      (
        certificationID int PRIMARY KEY IDENTITY,
        profileID int,
        cprAdultExp datetime null
      )

    I've tried COMMENT 'Expiration Date for the Adult CPR' and COMMENT='Expiration Date for the Adult CPR' after the data type, and SQL Server is giving me an error.
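
    T-SQL has no COMMENT clause in CREATE TABLE; SQL Server stores table and column descriptions as extended properties instead. A minimal sketch using sys.sp_addextendedproperty, assuming the table lives in the default dbo schema:

      -- attach a description to the cprAdultExp column (dbo schema assumed)
      EXEC sys.sp_addextendedproperty
           @name       = N'MS_Description',
           @value      = N'Expiration Date for the Adult CPR',
           @level0type = N'SCHEMA', @level0name = N'dbo',
           @level1type = N'TABLE',  @level1name = N'cert_Certifications',
           @level2type = N'COLUMN', @level2name = N'cprAdultExp';

    The same call without the COLUMN level attaches a description to the table itself, and Management Studio surfaces these values under Extended Properties.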

    Read the article

  • Do I have to chmod 777 my NFS folder when I share?

    - by luckytaxi
    Under Red Hat, if I export a folder as an NFS mount, does the folder have to have RW for users/groups/others? Right now /storage/software is -rwxr-xr-x root/root, i.e. in /etc/exportfs:

      /storage/software *(rw,sync)

    On my client, I can mount but I can't write. I'm using a regular user and NOT root. I think "no_root_squash" fixes it, but I really don't want that. Then again, I don't want to have to chmod 777 the folder on the server either.

    Read the article

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is:

      - create an EC2 machine image that has my application and associated data files installed
      - boot up a large number (e.g. 100) of instances of this image
      - split my input files up into 100 batches and send one batch to be processed on each instance

    I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use Block Storage, but a given Block Storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?

    Read the article

  • "Always available offline" option missing on one network drive in Windows 7

    - by Rynardt
    My home network has two network storage devices: a Popcorn Hour A-110 for media content and a D-Link DNS-320 in RAID 1 configuration for business files. When I access these network drives and right-click a folder, a context menu with an "Always available offline" entry appears for the A-110 device, but not for the D-Link. I have tested this in both Windows 7 32-bit and Windows Vista 64-bit. In both instances the "Always available offline" option is only available for the A-110 storage device, and not for the D-Link. How do I get this option for the D-Link? Any advice or ideas are welcome.

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

      - Backbone.js frontend
      - Rails 3.2
      - PostgreSQL
      - Resque + S3 for storage

    The flow of the app is as follows:

      1) Request from frontend. Upload a video.
      2) Storing the video.
      3) Querying external APIs.
      4) Processing / encoding videos.
      5) Post to frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app over several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. Also, I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

      A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
      B) Backend server (BS), main database. Gets requests from 1), posts to 2), saves uploads to 3).
      C) S3 storage.
      D) Server for querying APIs. Basically just Resque workers that post info back to 2).
      E) Server for video encoding. Processes videos uploaded on 3) and uploads them back.

    So I will have:

      A) frontend
           \
            \
      B) MAIN_APP/DB ----- C) S3 Storage (Files)
            /       \              /
           /         \            /
      D) ExternalAPI_queries   E) Video_Processing
         (redundant DB)           (redundant DB)

    All of this will supposedly talk to each other via HTTP requests. My reason for this is that the video processing part is really the most resource-intensive, and I would just run a barebones application that accepts requests and starts processing them. Questions:

      1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is this the right approach, or should I have one database that everyone connects to (and if so, how)?
      2) Is it a good idea to separate the API queries from the video processing part? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is way more intensive.
      3) What should I use to distribute calls between backend apps based on load?

    Read the article

  • How to update the primary key of a table which is referenced as a foreign key in another table?

    - by Mobin
    Suppose a table "Person" has columns "SSN", "Name", "Address", and another table "Contacts" has "Contact_ID", "Contact_Type", "SSN" (the primary key of Person); similarly, a table "Records" has "Record_ID", "Record_Type", "SSN" (the primary key of Person). Now I want that when I change or update SSN in the Person table, it changes accordingly in the other two tables. Can anyone help me with a trigger for that, or with how to set up foreign key constraints for these tables?
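
    A foreign key declared with ON UPDATE CASCADE usually covers this without any trigger: when Person.SSN changes, the referencing rows are updated automatically. A minimal sketch for the Contacts table, assuming an engine that enforces foreign keys (SQL Server, PostgreSQL, MySQL/InnoDB); the constraint name is made up, and Records would get an analogous constraint:

      -- hypothetical constraint name; repeat the same pattern for Records
      ALTER TABLE Contacts
        ADD CONSTRAINT FK_Contacts_Person
        FOREIGN KEY (SSN) REFERENCES Person (SSN)
        ON UPDATE CASCADE;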

    Read the article

  • MySQL upgrade on Ubuntu - any heads-ups?

    - by Rob Sedge
    I need to upgrade MySQL on Ubuntu. It is a production server, so I am naturally cautious. My many Google searches essentially say that I need to:

      1) Back up my current MySQL databases and tables/data
      2) Uninstall the current MySQL
      3) Install the new MySQL 5+
      4) Restore the databases, tables and data
      5) Hope and pray I got it right

    Something doesn't seem right; that sounds like a lot of downtime and risk. Am I missing something, or is there a simpler solution? I am upgrading from MySQL 4 to 5 on Ubuntu 10. Many thanks, Rob
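
    After restoring on the upgraded server, a quick per-table sanity check can be run from SQL before pointing the application back at the database. A small sketch with hypothetical database and table names:

      -- verify a restored table and list status (row counts, engines) for a database
      CHECK TABLE mydb.orders;
      SHOW TABLE STATUS FROM mydb;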

    Read the article

  • How to design a network for connectivity between private and corporate LANs?

    - by maruti
    There is a bunch of servers connected to shared storage in a private LAN (10.x.x.x). This private LAN is managed by a Windows server (DHCP, DNS and directory services). These hosts need to be reachable from outside the datacenter, e.g. via Remote Desktop. Can the NIC2 on each of the hosts be connected to the other, public LAN without compromising speed or security? What are the important considerations: additional hardware, like switches? Routing and DNS software? Currently available hardware: a Dell PowerConnect 6224 switch, which I am planning to use for the storage network. Software: Windows Server 2003 for DHCP, DNS and AD. Would it be more flexible to use Linux distributions like IPCop, Untangle, etc.? All that I am looking for is good isolation between the private and other networks, avoiding DHCP, DNS and AD clashes.

    Read the article

  • Will a larger hard drive affect performance?

    - by user273010
    My laptop came with a 500 GB hard drive. I use my laptop for storing my digital photographs, and only have about 14 GB of file storage left on the original hard drive. I have a 750 GB external hard drive, but am leery of relying on it for primary storage as I tend to knock things over, and it has already crashed once and I lost a lot of the files. I am looking at a 1 TB internal hard drive, but am concerned that storing so much data will affect the computer's performance. Should I also increase RAM from 4 to 8 GB (the limit for my 64-bit, Windows 7, Asus A54C laptop)?

    Read the article

  • HBase and 1-to-Many Relations

    - by Shuja
    Hi all, I have a question which is best described by the following scenario. Suppose I have three tables: BaseCategory, Category and Products. If I am thinking in terms of an RDBMS, then the relationships among these tables are:

      1) One BaseCategory has many Categories
      2) One Category has many Products

    Now I am thinking of converting this to HBase. Can anybody help me with how to map these relations into HBase?

    Read the article

  • Best Solution For My Requirements

    - by Eray
    Hello, I'm a web developer. I have a few small online web applications and a few WordPress blogs, but I don't have much experience with installing / configuring web servers. One of my web applications needs cron jobs; it will check the availability of a lot of web sites, and it will consume a lot of RAM, so I think shared hosting isn't suitable for this. However, 1GB of storage is enough, I think; I don't need much storage for my web sites. What do you think? Which hosting solution is more suitable for my requirements? Reseller? VPS? Cloud server? etc.

    Read the article

  • Slow MySQL query... only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs in under a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log:

      # Query_time: 543  Lock_time: 0  Rows_sent: 0  Rows_examined: 124948974
      use statsdb;
      SELECT count(distinct Visits.visitorid) as 'uniques'
      FROM Visits,Visitors
      WHERE Visits.visitorid=Visitors.visitorid
        and candidateid in (32)
        and visittime>=1275721200 and visittime<=1275807599
        and (omit=0 or omit>=1275807599)
        AND Visitors.segmentid=9
        AND Visits.visitorid NOT IN
          (SELECT Visits.visitorid
           FROM Visits,Visitors
           WHERE Visits.visitorid=Visitors.visitorid
             and candidateid in (32)
             and visittime<1275721200
             and (omit=0 or omit>=1275807599)
             AND Visitors.segmentid=9);

    It's basically counting unique visitors, and it's doing that by counting the visitors for today and then subtracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long - even with the exact same query under the same server load. Here's the EXPLAIN on this query. As you can see, it's using the indexes I've set up:

      id  select_type         table     type    possible_keys                  key                  key_len  ref                       rows   Extra
      1   PRIMARY             Visits    range   visittime_visitorid,visitorid  visittime_visitorid  4        NULL                      82500  Using where; Using index
      1   PRIMARY             Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where
      2   DEPENDENT SUBQUERY  Visits    ref     visittime_visitorid,visitorid  visitorid            8        func                      1      Using where
      2   DEPENDENT SUBQUERY  Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where

    I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time, since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods. Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time. Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing? In case you want server info:

      - Accessing via PHP 4.4.4
      - MySQL 4.1.22
      - All tables are InnoDB
      - We run OPTIMIZE TABLE on all tables weekly
      - The sum of both the tables used in the query is 500 MB

    MySQL config:

      key_buffer = 350M
      max_allowed_packet = 16M
      thread_stack = 128K
      sort_buffer = 14M
      read_buffer = 1M
      bulk_insert_buffer_size = 400M
      set-variable = max_connections=150
      query_cache_limit = 1048576
      query_cache_size = 50777216
      query_cache_type = 1
      tmp_table_size = 203554432
      table_cache = 120
      thread_cache_size = 4
      wait_timeout = 28800
      skip-external-locking
      innodb_file_per_table
      innodb_buffer_pool_size = 3512M
      innodb_log_file_size=100M
      innodb_log_buffer_size=4M
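
    Since the asker invites better formulations: on MySQL 4.1 a dependent NOT IN subquery is often the expensive part, and the same "visitors today who were never here before" count can be written as a LEFT JOIN anti-join. A hedged sketch, assuming (as the index names in the EXPLAIN suggest) that visittime lives on Visits while candidateid, omit and segmentid live on Visitors:

      -- count visitors in today's window that have no visit before the window start
      SELECT COUNT(DISTINCT v.visitorid) AS uniques
      FROM Visits v
      JOIN Visitors vr
        ON vr.visitorid = v.visitorid
      LEFT JOIN Visits prior
        ON prior.visitorid = v.visitorid
       AND prior.visittime < 1275721200
      WHERE vr.candidateid IN (32)
        AND v.visittime >= 1275721200 AND v.visittime <= 1275807599
        AND (vr.omit = 0 OR vr.omit >= 1275807599)
        AND vr.segmentid = 9
        AND prior.visitorid IS NULL;  -- no earlier visit exists

    Whether this beats the original under load is something only the slow query log can confirm, but it removes the correlated subquery that EXPLAIN reports as DEPENDENT SUBQUERY.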

    Read the article

  • GridView in ASP.NET 2.0

    - by Subho
    How can I populate an editable grid with data from different tables in MS SQL Server 2005 using a code-behind function? I have used:

      Dim conn As New SqlConnection(conn_web)
      Dim objCmd As New SqlDataAdapter(sql, conn)
      Dim oDS As New DataSet
      objCmd.Fill(oDS, "TAB")
      Dim dt As DataTable = oDS.Tables(0)
      Dim rowCount As Integer = dt.Rows.Count
      Dim dr As DataRow = dt.NewRow()
      If rowCount = 0 Then
          e.DataSource = Nothing
          e.DataBind()
          e.Focus()
      Else
          e.DataSource = dt
          e.DataBind()
      End If

    Read the article

  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup solutions in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but am having great difficulty extending it to a remote data center. We currently do a full backup on Friday, differentials Mon - Thu, and rotate offsite Friday morning... rinse and repeat week after week. BTW, we use disks and have been very happy with this approach. We could buy a large storage server and back up everything to it, but this solution doesn't give you offsite storage. We could encrypt and upload to Amazon or some other online storage, but that would take a large amount of time given the data and would be rather expensive, paying for the bandwidth leaving the data center and receiving at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now. But that just seems old-fashioned. What am I missing? Are there better options?

    Read the article

  • MySQL database query returns empty result

    - by user1791096
    I am doing a data migration and getting an empty result from a simple query with one join. The query is:

      SELECT * FROM users u INNER JOIN temp_users tu ON tu.uid = u.uid

    There are hundreds of records which have the same uid in both tables, but this query returns only one record. The structure of the tables is:

      users table:      uid varchar(50) utf8_general_ci Yes NULL
      temp_users table: uid varchar(50) utf8_general_ci Yes NULL

    Has anyone faced the same problem?
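
    When both uid columns look identical but rows still fail to join, invisible differences introduced by the migration (trailing whitespace, non-printing characters) are a common culprit. A small diagnostic sketch; the TRIM-based comparison and the LIMIT are just one way to probe it:

      -- compare exact matches against matches after trimming whitespace
      SELECT
        (SELECT COUNT(*) FROM users u JOIN temp_users tu ON tu.uid = u.uid)             AS exact_matches,
        (SELECT COUNT(*) FROM users u JOIN temp_users tu ON TRIM(tu.uid) = TRIM(u.uid)) AS trimmed_matches;

      -- inspect raw values and byte lengths side by side for a sample of rows
      SELECT u.uid, LENGTH(u.uid) AS u_len, tu.uid, LENGTH(tu.uid) AS tu_len
      FROM users u
      LEFT JOIN temp_users tu ON tu.uid = u.uid
      LIMIT 20;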

    Read the article

  • vsftpd with pam_winbind.so

    - by David
    I'm trying to set up vsftpd to use logins from our domain. I want the FTP users to be able to log in using their Active Directory username/password and have full access to /media/storage/ftp/username. I set up PPTP using winbind and it is working fine, so I believe the issue is with vsftpd and PAM. The FTP server runs but gives a 530 for the login. I turned on debug for the PAM module, but I see nothing in the syslog. vsftpd only logs a failed login in its own log.

    /etc/pam.d/vsftpd:

      auth required pam_winbind.so debug

    /etc/vsftpd.conf:

      listen=YES
      listen_ipv6=NO
      connect_from_port_20=YES
      anonymous_enable=NO
      local_enable=YES
      write_enable=YES
      xferlog_enable=YES
      idle_session_timeout=600
      data_connection_timeout=120
      nopriv_user=ftp
      ftpd_banner=Welcome to Scantiva! Authorized access only!
      local_umask=022
      local_root=/media/storage/ftp/$USER
      user_sub_token=$USER
      chroot_local_user=YES
      secure_chroot_dir=/var/run/vsftpd/empty
      pam_service_name=vsftpd
      guest_enable=YES
      guest_username=ftp
      ssl_enable=YES
      allow_anon_ssl=NO
      force_local_data_ssl=NO
      force_local_logins_ssl=NO
      ssl_tlsv1=YES
      ssl_sslv2=YES
      ssl_sslv3=YES
      rsa_cert_file=/etc/ssl/private/vsftpd.pem

    Read the article

  • Start a ZFS RAIDZ zpool with two disks, then add a third?

    - by Doug S.
    Let's say I have two 2TB HDDs and I want to start my first ZFS zpool. Is it possible to create a RAIDZ with just those two disks, giving me 2TB of usable storage (if I understand it right), and then later add another 2TB HDD, bringing the total to 4TB of usable storage? Am I correct, or does there need to be three HDDs to start with? The reason I ask is that I already have one 2TB drive in use that's full of files. I want to transition to a zpool, but I'd rather only buy two more 2TB drives if I can. From what I understand, RAIDZ behaves similarly to RAID5 (with some major differences, I know, but in terms of capacity). However, RAID5 requires 3+ drives, and I was wondering if RAIDZ has the same requirement. If I have to, I can buy the three drives and just start there, later adding the fourth, but if I could start with two and move to three that would save me $80.

    Read the article
