Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 319/588

  • SD to PCI adapter

    - by vy32
    We are looking for a way to plug an SD card into a PCI slot without going through a USB interface. We want to read the SD card's serial number directly and issue SD commands. The Chumby and the Google Android G1 phones both have SD slots that you can read from Linux without going through USB, but the Chumby only has a single SD card that's used for booting, and the G1 doesn't have any other reasonable storage if you use the SD card for that purpose. I'd really like a desktop with a few SD slots that are directly accessible. Does anybody know of anything?

    Read the article

  • Spring Roo vs AppFuse: generating the service/DAO layer

    - by cometta
    I am looking for feedback from experienced users of Spring Roo and AppFuse. Which do you think does a better job of reverse engineering database tables and generating a service layer, DAO layer, and JPA entities? If I am not mistaken, Spring Roo currently cannot reverse engineer a database.

    Read the article

  • Changing a column collation

    - by Stefan
    Hey there, I have a database already set up. I am trying to change the collation of my username column to a case-sensitive one so that logins are restricted to exactly what users signed up with. However, I keep getting this: #1025 - Error on rename of './yebutno_ybn/#sql-76dc_8581dc' to './yebutno_ybn/user' (errno: 150). There are foreign key constraints because of related tables. Any ideas? This will save me a lot of hassle on the PHP side of things! Thanks, Stefan
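    A minimal sketch of the usual workaround, assuming the column is user.username (the referencing table, constraint name and column length below are placeholders): errno 150 generally means another table's foreign key still points at the column, so the referencing constraints have to be dropped, the collation changed, and the constraints re-created.

        -- Find which constraints reference the column (MySQL 5.x).
        SELECT constraint_name, table_name
        FROM information_schema.key_column_usage
        WHERE referenced_table_name = 'user'
          AND referenced_column_name = 'username';

        -- Drop the referencing constraint, change the collation, re-create it.
        -- (The referencing column may need the same collation change before
        -- the foreign key can be re-added.)
        ALTER TABLE user_profile DROP FOREIGN KEY fk_profile_username;

        ALTER TABLE user
          MODIFY username VARCHAR(64)
          CHARACTER SET latin1 COLLATE latin1_general_cs NOT NULL;

        ALTER TABLE user_profile
          ADD CONSTRAINT fk_profile_username
          FOREIGN KEY (username) REFERENCES user (username);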

    Read the article

  • Modelling class relations

    - by phenevo
    Hi, I have a few classes: Article (Content, ID) and Magazine (Name, Code), and 3 tables in the database: Articles, Magazines and ArticlesInMagazines (two fields: IDArticle and CodeMagazine). In the app I've got a module to manage Articles, and a DataGridView to relate them to magazines; the DataGridView has two fields: MagazineCode and IsPublished. The same article can be in many magazines (1:n). How would you implement this in the model? Does Article have to have a List field?? I'm not sure, because it is the Magazine that associates articles.
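    A sketch of one common way to model this in C# (class and property names are assumptions based on the description above, not taken from the original code): keep an explicit link type mirroring the ArticlesInMagazines table, so the association data (MagazineCode, IsPublished) lives in its own object rather than as a raw list of codes on Article.

        using System.Collections.Generic;

        // Hypothetical classes mirroring the three tables described above.
        public class Article
        {
            public int Id { get; set; }
            public string Content { get; set; }
        }

        public class Magazine
        {
            public string Code { get; set; }
            public string Name { get; set; }

            // The magazine owns its placements, matching ArticlesInMagazines.
            public readonly List<ArticleInMagazine> Placements = new List<ArticleInMagazine>();
        }

        // One row of ArticlesInMagazines: which article appears in which magazine.
        public class ArticleInMagazine
        {
            public int ArticleId { get; set; }
            public string MagazineCode { get; set; }
            public bool IsPublished { get; set; }
        }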

    Read the article

  • In LINQPad can you access SYSOBJECTS using LINQ?

    - by Xanthalas
    In LINQPad is there any way to access either the SYSOBJECTS table or the various INFORMATION_SCHEMA.xxx views using LINQ? I spend a lot of time searching through our huge company database for partial names as there are too many tables and Stored Procedures to remember the names of them all. I know I can enter and run SQL in LINQPad but I would like to do this in LINQ instead of SQL as LINQ is more fun :) Thanks Xanthalas
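    One possible approach (a sketch, assuming a LINQ to SQL connection in LINQPad; the search pattern is just an example): LINQPad's typed DataContext exposes ExecuteQuery<T>, so a system view can be read with one raw query and everything else done in ordinary LINQ.

        // C# Statements in LINQPad, with a database connection selected.
        var pattern = "%invoice%";   // example search term

        // One raw query against a system view...
        var names = ExecuteQuery<string>(
                "SELECT name FROM sys.objects WHERE name LIKE {0} ORDER BY name",
                pattern)
            .ToList();

        // ...then ordinary LINQ (and LINQPad's Dump) from here on.
        names.Where(n => !n.StartsWith("sys")).Dump("Matching objects");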

    Read the article

  • Directory name for non-generic Proprietary stuff

    - by George Bailey
    Is there a common or standard directory name for the company-specific stuff that lives on a server? This would include any crons, scripts, webserver docroots, programs, non-database storage areas, service codebases, etc. We could of course put crons in /etc/cron.d, put docroots in /home/webservd, and scripts in one of the bin directories, but that would be messy. If XYZ Technology Corp wanted to have all the non-generic stuff in one place, would they make a directory /xyz or /home/xyz, or is there an alternative directory name that is not company-specific but intended for company-specific stuff? What is most common?

    Read the article

  • mysql never releases memory

    - by Ishu
    I have a production server clocking about 4 million page views per month. The server has got 8GB of RAM and mysql acts as a database. I am facing problems in handling mysql to take this load. I need to restart mysql twice a day to handle this thing. The problem with mysql is that it starts with some particular occupation, the memory consumed by mysql keeps on increasing untill it reaches the maximum it can consume and then mysql stops responding slowly or does not respond at all, which freezes the server. All my tables are indexed properly and there are no long queries. I need some one to help on how to go about debugging what to do here. All my tables are myisam. I have tried configuring the parameters key_buffer etc but to no rescue. Any sort of help is greatly appreciated. Here are some parameters which may help. mysql --version mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1 mysql> show variables; +---------------------------------+------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/ | | bdb_cache_size | 8384512 | | bdb_home | /var/lib/mysql/ | | bdb_log_buffer_size | 262144 | | bdb_logdir | | | bdb_max_lock | 10000 | | bdb_shared_data | OFF | | bdb_tmpdir | /tmp/ | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_filesystem | binary | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 10 | | datadir | /var/lib/mysql/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | keep_files_on_create | OFF | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | NO | | have_bdb | YES | | have_blackhole_engine | NO | | have_compress | YES | | have_crypt | YES | | have_csv | NO | | have_dynamic_loading | YES | | have_example_engine | NO | | have_federated_engine | NO | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_merge_engine | YES | | have_ndbcluster | NO | | have_openssl | DISABLED | | have_ssl | DISABLED | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | | init_connect | | | init_file | | | init_slave | | | interactive_timeout | 28800 | | join_buffer_size | 131072 | | key_buffer_size | 2621440000 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | lc_time_names | en_US | | license | GPL | | 
local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | ON | | log_bin_trust_function_creators | OFF | | log_error | | | log_queries_not_using_indexes | OFF | | log_slave_updates | OFF | | log_slow_queries | ON | | log_warnings | 1 | | long_query_time | 8 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 0 | | max_allowed_packet | 8388608 | | max_binlog_cache_size | 4294963200 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 400 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 16777216 | | max_insert_delayed_threads | 20 | | max_join_size | 4294967295 | | max_length_for_sort_data | 1024 | | max_prepared_stmt_count | 16382 | | max_relay_log_size | 0 | | max_seeks_for_key | 4294967295 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 4294967295 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 2146435072 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 16777216 | | myisam_stats_method | nulls_unequal | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 2000 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /var/run/mysqld/mysqld.pid | | plugin_dir | | | port | 3306 | | preload_buffer_size | 32768 | | profiling | OFF | | profiling_history_size | 15 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | query_cache_min_res_unit | 4096 | | query_cache_size | 134217728 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 4096 | | read_buffer_size | 2097152 | | read_only | OFF | | read_rnd_buffer_size | 8388608 | | relay_log | | | relay_log_index | | | relay_log_info_file | relay-log.info | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | secure_file_priv | | | server_id | 1 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /var/lib/mysql/mysql.sock | | sort_buffer_size | 2097152 | | sql_big_selects | ON | | sql_mode | | | sql_notes | ON | | sql_warnings | OFF | | ssl_ca | | | ssl_capath | | | ssl_cert | | | ssl_cipher | | | ssl_key | | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | system_time_zone | CST | | table_cache | 256 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 8 | | thread_stack | 196608 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | /tmp/ | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.77-log | | version_bdb | Sleepycat Software: Berkeley DB 4.1.24: (January 29, 2009) | | version_comment | Source distribution | | version_compile_machine | i686 | | version_compile_os | redhat-linux-gnu | | wait_timeout | 28800 | +---------------------------------+------------------------------------------------------------+
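    Not a fix, but a rough sizing sketch under the usual MyISAM assumptions: steady-state memory is roughly key_buffer_size plus query_cache_size plus max_connections times the per-connection buffers (read, sort, join, read_rnd). With key_buffer_size at ~2.5 GB on an i686 (32-bit) mysqld, the process is already close to the total address space it can use. The values below are illustrative placeholders only, not recommendations:

        -- Illustrative only: keep
        --   key_buffer_size + query_cache_size
        --     + max_connections * (read + sort + join + read_rnd buffers)
        -- well under what a 32-bit mysqld can address.
        SET GLOBAL key_buffer_size      = 1024 * 1024 * 1024;  -- example: 1 GB
        SET GLOBAL max_connections      = 200;
        SET GLOBAL read_buffer_size     = 1024 * 1024;
        SET GLOBAL sort_buffer_size     = 1024 * 1024;
        SET GLOBAL join_buffer_size     = 512 * 1024;
        SET GLOBAL read_rnd_buffer_size = 1024 * 1024;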

    Read the article

  • NFS and KVM. Slow Speed

    - by Javier Martinez
    I have a KVM virtualization setup in Debian with 2 guests (Debian and Windows 2008). I want to have a shared 'mount point' that can be accessed by all 3 systems (host and 2 guests) at the same time, so the only thing I found was NFS/SMB network storage, and I picked NFS. Due to my Ethernet network (10/100), the average speed I get when accessing/transferring files between the 3 systems is always 8~10 MB/s. The question is whether there is any way to share files between the 3 systems (at the same time) without being limited to the Ethernet's ~10 MB/s and wasting the speed of my SATA disks.

    Read the article

  • SqlDataSource CommandTimeout not working

    - by Cedric Aube
    Good day, I'm using a SqlDataSource with a dynamic query generated in C#, based on user choices in many fields. However, since our tables are very large, I sometimes get a command timeout exception. I tried to set the property in the 'Selecting' event of the SqlDataSource like so:

        protected void SqlDataSource_PSearch_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            e.Command.CommandTimeout = 900;
        }

    but with no luck, as if this attribute was ignored. .NET 2.0, SQL Server 2005. Any idea?

    Read the article

  • KVM slow guest i/o

    - by Akarot
    Host: Debian 6.0 (squeeze) with qemu-kvm and libvirt from squeeze-backports:

        ii  qemu-kvm     1.0+dfsg-8~bpo60+1
        ii  libvirt-bin  0.9.8-2~bpo60+2

    It has 3 TB SATA drives with software RAID and LVM, and a sequential write speed of ~140 MB/s measured with dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync. The elevator is set to cfq.

    Guest: Debian 6.0 (squeeze), using LVM as storage. The drivers are virtio with cache='none', and the elevator is set to noop. Sequential write speed is considerably slower, only 25-50 MB/s.

    I'm kind of running out of ideas for further tweaks, but I'm sure the I/O speed should be much faster, because many people report almost native performance with LVM.
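    For reference, a libvirt disk definition of the kind described above might look like the sketch below; the volume path and the io='native' attribute are assumptions for illustration, not taken from the original post.

        <!-- Sketch: virtio disk backed directly by an LVM logical volume. -->
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none' io='native'/>
          <source dev='/dev/vg0/guest-root'/>
          <target dev='vda' bus='virtio'/>
        </disk>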

    Read the article

  • jQuery unbinding click event when maximum number of children is displayed

    - by RyanP13
    I have a personal details form that allows you to enter a certain number of dependants, which is determined by the JSP application. The first dependant is visible and the user has the option to add dependants up to the maximum number. All other dependants are hidden by default and are displayed when a user clicks the 'Add another dependant' button. When the maximum number of dependants has been reached, the button is greyed out and a message is generated via jQuery and displayed to tell the user exactly this.

    The issue I am having is that when the maximum number of dependants has been reached the message is displayed, but then the user can still click the button to add more dependants and the message keeps on being generated. I thought unbinding the click event would sort this, but it seems a second message can still be generated.

    Here is the function I wrote to generate the message:

        // Dependant message function
        function maxDependMsg(msgElement) {
            // number of children can change per product, needs to be dynamic
            // count number of dependants in HTML
            var $dependLength = $("div.dependant").length;
            // add class maxAdd to grey out Button
            // create maximum dependants message and display, will not be created if JS turned off
            $(msgElement)
                .addClass("maxAdd")
                .after($('<p>')
                    .addClass("maxMsg")
                    .append("The selected web policy does not offer cover for more than " + $dependLength + " children, please contact our customer advisers if you wish discuss alternative policies available."));
        }

    There is a hyperlink with a click event attached like so:

        $("a.add").click(function(){
            // Show the next hidden table on clicking add child button
            $(this).closest('form').find('div.dependant:hidden:first').show();
            // Get the number of hidden tables
            var $hiddenChildren = $('div.dependant:hidden').length;
            if ($hiddenChildren == 0) {
                // save visible state of system message
                $.cookies.set('cpqbMaxDependantMsg', 'visible');
                // show system message that you can't add anymore dependants than what is on page
                maxDependMsg("a.add");
                $(this).unbind("click");
            }
            // set a cookie for the visible state of all child tables
            $('div.dependant').each(function(){
                var $childCount = $(this).index('div.dependant');
                if ($(this).is(':visible')) {
                    $.cookies.set('cpqbTableStatus' + $childCount, 'visible');
                } else {
                    $.cookies.set('cpqbTableStatus' + $childCount, 'hidden');
                }
            });
            return false;
        });

    All of the cookies code is for state saving when users are going back and forward through the process.
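    A minimal sketch of one way to guard against the duplicate message (an assumed fix, not taken from the original post): since unbind("click") only removes handlers bound directly to a.add, an explicit check that the message is not already on the page is a cheap belt-and-braces guard.

        // Sketch: only build the message if it is not already in the page.
        function maxDependMsg(msgElement) {
            if ($(".maxMsg").length > 0) {
                return; // message already shown, nothing to do
            }
            var $dependLength = $("div.dependant").length;
            $(msgElement)
                .addClass("maxAdd")
                .after($('<p>')
                    .addClass("maxMsg")
                    .append("The selected web policy does not offer cover for more than " + $dependLength + " children."));
        }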

    Read the article

  • Can I have different ESX hosts accessing the same LUN over different protocols?

    - by Kevin Kuphal
    I currently have a cluster of two ESX 3.5U2 servers connected directly via Fibre Channel to a NetApp 3020 cluster. These hosts mount four VMFS LUNs for virtual machine storage. Currently these LUNs are only made available via our Fibre Channel initiator in the NetApp configuration. If I were to add an ESXi host to the cluster for internal IT use, can I:

    - Make the same VMFS LUNs available via the iSCSI initiator on the NetApp
    - Connect this ESXi host to those LUNs via iSCSI
    - Do all of this while the existing two ESX hosts are connected to those LUNs via Fibre Channel

    Does anyone have experience with this type of mixed protocol environment, specifically with NetApp?

    Read the article

  • Wireless Network Issue, Disconnecting Randomly From Network

    - by Surfer513
    I'm having an odd problem with my wireless network. Here is the background information:

    - Server (Windows Server 2008)
    - 1 to 10 end user machines connecting to the network
    - Layer 3 access point (Asus WL-330gE) connected to the Ethernet of the server; all machines connect to the network via the AP

    The end user machines get a connection to the server with no problems initially, but then connections to the server/network are randomly lost throughout the day. The wireless NICs of the machines still see the wireless network but are unable to connect to it. Then, after some time, the connection is regained automatically. I initially thought there was a problem with this particular AP, but then I took the same make/model AP out of storage and still ran into the problem. Any ideas what could be causing this? It is very confusing that the wireless NICs on the end user machines can still see the network but not connect, and that the connections are randomly lost/regained. Thanks in advance!

    Read the article

  • Retrieve sequence of data from different columns.

    - by janetsmith
    Let's say I have a table containing the following data:

        | id | t0 | t1 | t2 |
        |  1 |  4 |  5 |  6 |
        |  2 |  3 |  3 |  2 |
        |  3 |  6 |  4 |  5 |
        |  4 |  4 |  5 |  5 |

    I want to retrieve all the rows containing 4, 5 and 6 (regardless of the position of the numbers within the row), so row 1 and row 3 will be selected. How do I do that with an SQL query?
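    One way to express this (a sketch; the table name my_table is a placeholder, the columns are as above): each target value simply has to appear in at least one of the three columns.

        -- Select rows where 4, 5 and 6 each appear somewhere in t0, t1, t2.
        SELECT id, t0, t1, t2
        FROM   my_table
        WHERE  4 IN (t0, t1, t2)
          AND  5 IN (t0, t1, t2)
          AND  6 IN (t0, t1, t2);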

    Read the article

  • Best way to automatically synchronize files between Linux and Windows

    - by Gregory
    My first choice was rsync, but it caused some issues and is too manual. My second choice, currently under evaluation, is Unison. Are there any other good options for bi-directional auto-syncing? The syncing tool cannot add its own files to the directories to be synced, which removes CVS/SVN as a choice (plus they are too manual). The requirements are:

    - A user-level program on both sides; no root account access available
    - Only scanning on Linux; on Windows it could be a virtual drive/path
    - Very fast and efficient, like rsync
    - The machines are not on the same network
    - Files cannot fall into the wrong hands, nor can they be handled by 3rd parties, which pretty much excludes all online storage sites

    Read the article

  • Any suggestions on how to extract 6 million records from Oracle 10g?

    - by R K
    I just want to give you a little background: I need to write PL/SQL which will extract 6 million records, joining different tables, and create a file from them. I need suggestions, specifically on how to fetch this many records, since fetching millions of records in a single go can be highly resource intensive. So the question is: how do I fetch this many records? Any PL/SQL will be highly appreciated.
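    A minimal sketch of the usual chunked approach (the table, column and directory names below are placeholders): fetch in batches with BULK COLLECT ... LIMIT so only one batch sits in memory at a time, and append each batch to the file with UTL_FILE.

        DECLARE
            -- Placeholder query; replace with the real multi-table join.
            CURSOR c_data IS
                SELECT o.order_id || ',' || c.customer_name AS line
                FROM   orders o
                JOIN   customers c ON c.customer_id = o.customer_id;

            TYPE t_lines IS TABLE OF VARCHAR2(4000);
            l_lines t_lines;
            l_file  UTL_FILE.FILE_TYPE;
        BEGIN
            -- EXPORT_DIR must be an existing Oracle directory object.
            l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'extract.csv', 'w', 32767);

            OPEN c_data;
            LOOP
                -- Fetch a bounded batch so memory use stays flat.
                FETCH c_data BULK COLLECT INTO l_lines LIMIT 10000;
                EXIT WHEN l_lines.COUNT = 0;

                FOR i IN 1 .. l_lines.COUNT LOOP
                    UTL_FILE.PUT_LINE(l_file, l_lines(i));
                END LOOP;
            END LOOP;
            CLOSE c_data;

            UTL_FILE.FCLOSE(l_file);
        END;
        /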

    Read the article

  • XenServer 5.5 error adding additional server to resource pool

    - by SideShowCoder
    I'm running Citrix XenServer 5.5 as a test setup, with Openfiler providing storage via NFS. I tried to set up a resource pool to test live migration, but I'm unable to add my 2nd server to the pool. It fails after about 10 seconds with the error: 4/26/2010 2:54:52 PM Error: Adding server 'u-173-c047.XXX.XXX' to pool 'Portland' - Internal error: Stunnel.Stunnel_error("") I'm kind of lost right now as to where to look for what's causing this, and the error is not really of any help. Are there logs available somewhere besides in XenCenter which might be helpful? Any ideas what is causing this? Thanks

    Read the article

  • What is acceptable datastore latency on a VMware ESXi host?

    - by BeowulfNode42
    Looking at the performance figures on our existing VMware ESXi 4.1 host, at the Datastore/Real-time performance data:

    - Write latency: avg 14 ms, max 41 ms
    - Read latency: avg 4.5 ms, max 12 ms

    People don't seem to be complaining too much about it being slow with those numbers, but how much higher could they get before people found it to be a problem? We are reviewing our head office systems because we are running low on storage space, and are tossing up between buying a 2nd VM host with DAS or buying some sort of NAS for SMB file shares in the near term, and maybe running VMs from it in the longer term. Currently we have just under 40 staff at head office with 9 smaller branches spread across the country. Head office is running in an MS RDS session-based environment with Linux ERP and mail systems, in total 22 VMs on a single host with DAS made from a RAID 10 of 6x 15k SAS disks.

    Read the article

  • Using MongoDB with Ruby on Rails and the MongoMapper plugin

    - by Micke
    Hello, I am currently trying to learn Ruby on Rails, as I am a long-time PHP developer, so I am building my own community-like page. I have come pretty far and have made the user models and such using MySQL. But then I heard of MongoDB and looked into it a little bit more, and I find it kinda nice. So I have set it up and I am using MongoMapper for the connection between Rails and MongoDB, and I am now using it for the News page on the site. I also have a profile page for every user which includes their own guestbook, so other users can come to their profile and write a little message to them. My thought now is to change the User models from using MySQL to MongoDB. I can start by showing how the models for each user are set up.

    The User model:

        class User < ActiveRecord::Base
          has_one :guestbook, :class_name => "User::Guestbook"
        end

    The Guestbook model:

        class User::Guestbook < ActiveRecord::Base
          belongs_to :user
          has_many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id"
        end

    And then the Guestbook posts model:

        class User::Guestbook::Posts < ActiveRecord::Base
          belongs_to :guestbook, :class_name => "User::Guestbook"
        end

    I have divided it like this for my own convenience, but now that I am going to try to migrate to MongoDB I don't know how to make the tables. I would like to have one table for each user and, in that table, a "column" for all the guestbook entries, since MongoDB can have an EmbeddedDocument. I would like to do this so I just have one table for each user and not, like now, three tables just to be able to have a guestbook. So my thought is to have it like this:

    The User model:

        class User
          include MongoMapper::Document
          one :guestbook, :class_name => "User::Guestbook"
        end

    The Guestbook model:

        class User::Guestbook
          include MongoMapper::EmbeddedDocument
          belongs_to :user
          many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id"
        end

    And then the Guestbook posts model:

        class User::Guestbook::Posts
          include MongoMapper::EmbeddedDocument
          belongs_to :guestbook, :class_name => "User::Guestbook"
        end

    But then I can think of one problem: when I just want to fetch the user information, like a nickname and a birthdate, it will have to fetch all the user's guestbook posts as well. And if each user has like a thousand posts in the guestbook, that will be really much for the system to fetch. Or am I wrong? Do you think I should do it any other way? Thanks in advance, and sorry if I am hard to understand, but I am not so educated in the English language :)

    Read the article

  • Can you use a USB hard drive in ESXi?

    - by semi
    I know that you can install ESXi 4.0 on a thumb drive, but I was wondering if you could plug in an external hard drive to give extra storage to one of your VMs. We run a fileserver inside of ESXi that needs a space upgrade, but we're thinking of migrating to a different fileserver solution and would rather stick to external media to ease the later transition. Edit: Ideally I'd like the drive to show up to the VM directly and not have ESXi control it, so that I could move it to a different machine and still have all of the data appear the same.

    Read the article

  • jBASE 4.1 Database Noobster Questions

    - by Steve Johnson
    I am a software developer with development experience in C# and C++ .NET along with SQL Server 2005/08, Oracle and MySQL, but somehow I can't get jBASE to work on a Windows XP SP3 machine. My goal is to set up user accounts, create a database on a jBASE installation, authenticate, and backup/restore a few tables via a C++ program, and I don't need to do it with the built-in backup/restore tools of jBASE. I was able to install jBASE 4.1 along with all its accessories on my WinXP SP3 machine, and I was able to run the jSlimserver and TEMENOS server along with the licensing server. I was able to add the license key as well. But after that, what was I supposed to do? I have no idea. The docs and online help don't answer a simple question: how do I create a database? The Google search results from the jBASE site all go to 404 pages! Can a jBASE expert guide me through the following steps:

    - Create a jBASE database
    - Create users
    - Authenticate via those users
    - Connect to the database
    - Create tables and insert data
    - Connect via a C++ or C# program to the jBASE DB and backup/restore tables

    I know that this is too much to ask, but I just don't get the jBASE system; I can't get it to work on my system somehow. By the way, jdc and jexloree don't seem to do anything. I have checked that the environment variables for jBASE are set up correctly and I have verified them. There are no extra JRE or JDK installations on my system. Besides all that, only the licensing client, slim server and TEMENOS server seem to run and listen for connections, and no other executable ever seems to work. A simple tutorial to achieve the objective would be highly appreciated. Also, if anyone can point out a mistake I have made or anything I might need to check, please do so. I will be highly encouraged and obliged. Thanks, Steve

    Read the article

  • SQL: is this equivalent to a LEFT JOIN?

    - by Jim
    Is this equivalent to a LEFT JOIN?

        select distinct a.name, b.name
        from tableA a, (select name from tableB) as b

    It seems as though there is no link between the two tables. Is there an easier / more efficient way to write this?
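    For comparison, a sketch of the two forms (assuming name is the intended join key, which the original query does not actually state): the comma syntax above with no join condition behaves like a CROSS JOIN, producing every combination of rows, whereas a LEFT JOIN needs an explicit ON clause and keeps unmatched rows from the left table.

        -- What the original query does: every combination of rows (a cross join).
        SELECT DISTINCT a.name, b.name
        FROM tableA a
        CROSS JOIN tableB b;

        -- An actual LEFT JOIN, assuming the tables are meant to be linked on name:
        SELECT DISTINCT a.name, b.name
        FROM tableA a
        LEFT JOIN tableB b ON b.name = a.name;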

    Read the article
