Search Results

Search found 21589 results on 864 pages for 'primary key'.


  • MySQL: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table...

        CREATE TABLE `ref` (
          `id` INTEGER(13) NOT NULL AUTO_INCREMENT,
          `rel_id` INTEGER(13) NOT NULL,
          `p1` INTEGER(13) NOT NULL,
          `p2` INTEGER(13) DEFAULT NULL,
          `p3` INTEGER(13) DEFAULT NULL,
          `s` INTEGER(13) NOT NULL,
          `p4` INTEGER(13) DEFAULT NULL,
          `p5` INTEGER(13) DEFAULT NULL,
          `p6` INTEGER(13) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY (`s`),
          KEY (`rel_id`),
          KEY (`p3`),
          KEY (`p4`)
        );

    Here are the queries...

        SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"

        SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"

        INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6)
        VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

    Some notes:

    - The SELECTs will be run much more frequently than the INSERTs. However, occasionally I want to add a few hundred records at a time.
    - Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once.
    - I don't think I can normalize any more (I need the p values in combination).
    - The database as a whole is very relational.
    - This will be the largest table by far (the next largest is about 900k rows).

    UPDATE (08/11/2010): Interestingly, I've been given a second option... Instead of 192 trillion records I could store 2.6*10^16 (that's 15 zeros, i.e. 26 quadrillion), but I would only need to store one BIGINT(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value, occasionally adding records but never deleting them. That makes me think there must be a better solution than MySQL for simply storing numbers. Given this second option, should I take it, or stick with the first?

    [edit] Just got news of some testing that's been done - 100 million rows with this setup returns the query in 0.0004 seconds. [/edit]
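
    For the second option the whole design collapses to a single-column existence check; a minimal sketch of what that table and lookup could look like (table and column names here are illustrative, not from the question):

        CREATE TABLE seen_values (
          v BIGINT(18) UNSIGNED NOT NULL,
          PRIMARY KEY (v)
        ) ENGINE=InnoDB;

        -- existence check: the primary key makes this a single B-tree probe
        SELECT 1 FROM seen_values WHERE v = 26000000000000000 LIMIT 1;

    With only lookups and occasional inserts, almost any ordered key-value store would also fit; the PRIMARY KEY here is what keeps the check to one index probe.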


  • ssh permission denied

    - by Gitmo
    I am trying to ssh into a remote machine and I get the following debug messages:

        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to xxx.xxx.x.xx [xxx.xxx.xx.x] port 22.
        debug1: Connection established.
        debug3: Not a RSA1 key file /home/hadoop/.ssh/id_rsa.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug2: key_type_from_name: unknown key type '-----END'
        debug3: key_read: missing keytype
        debug1: identity file /home/hadoop/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2
        debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2
        debug2: fd 3 setting O_NONBLOCK
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
        debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
        debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
        debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
        debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
        debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,zlib@openssh.com
        debug2: kex_parse_kexinit: none,zlib@openssh.com
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: mac_setup: found hmac-md5
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug2: mac_setup: found hmac-md5
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug2: dh_gen_key: priv key bits set: 128/256
        debug2: bits set: 511/1024
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug3: check_host_in_hostfile: filename /home/hadoop/.ssh/known_hosts
        debug3: check_host_in_hostfile: match line 20
        debug1: Host '192.168.1.63' is known and matches the RSA host key.
        debug1: Found key in /home/hadoop/.ssh/known_hosts:20
        debug2: bits set: 511/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /home/hadoop/.ssh/id_rsa (0x241c110)
        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /home/hadoop/.ssh/id_rsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password
        debug2: we did not send a packet, disable method
        debug1: No more authentication methods to try.
        Permission denied (publickey,password).

    What seems to be the problem? I have tried everything, and this is driving me nuts.
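
    The log shows the server rejecting the offered key (and the client never attempts password, since its preferred-authentications list excludes it), which almost always points at a server-side authorized_keys problem rather than a client one. A hedged sketch of the usual checks (paths assume the hadoop account on the remote host):

        # on the client: install the public key on the remote machine
        ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.1.63

        # on the server: sshd silently ignores keys if permissions are too open
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    The "unknown key type '-----BEGIN'" noise is harmless - it is just ssh trying to parse the private key file as a public key - and the identity is still loaded ("identity file ... type 1").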


  • storing a sorted array of hashes

    - by srk
        use strict;
        use warnings;

        my @aoh = (
            { 3 => 15, 4 => 8,  5 => 9  },
            { 3 => 11, 4 => 25, 5 => 6  },
            { 3 => 5,  4 => 18, 5 => 5  },
            { 0 => 16, 1 => 11, 2 => 7  },
            { 0 => 21, 1 => 13, 2 => 31 },
            { 0 => 11, 1 => 14, 2 => 31 },
        );

        # declaring a new array to store the sorted hashes
        my @new;
        print "\n-------------expected output------------\n";
        foreach my $href (@aoh) {
            # i want a new array of hashes where the hashes are sorted
            my %newhash;
            my @sorted_keys = sort { $href->{$b} <=> $href->{$a} || $b <=> $a } keys %$href;
            foreach my $key (@sorted_keys) {
                print "$key => $href->{$key}\n";
                $newhash{$key} = $href->{$key};
            }
            print "\n";
            push(@new, \%newhash);
        }
        print "-----------output i am getting---------------\n";
        foreach my $ref (@new) {
            my @skeys = sort { $ref->{$a} <=> $ref->{$b} } keys %$ref;
            foreach my $key (@skeys) {
                print "$key => $ref->{$key}\n";
            }
            print "\n";
        }

    The output of the program (each line below is one group, printed one pair per line):

        -------------expected output------------
        3 => 15   5 => 9    4 => 8
        4 => 25   3 => 11   5 => 6
        4 => 18   5 => 5    3 => 5
        0 => 16   1 => 11   2 => 7
        2 => 31   0 => 21   1 => 13
        2 => 31   1 => 14   0 => 11

        -----------output i am getting---------------
        4 => 8    5 => 9    3 => 15
        5 => 6    3 => 11   4 => 25
        3 => 5    5 => 5    4 => 18
        2 => 7    1 => 11   0 => 16
        1 => 13   0 => 21   2 => 31
        0 => 11   1 => 14   2 => 31

    Please tell me what I am doing wrong in storing the hashes into a new array, and how I can achieve what I want. Thanks in advance.
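
    Perl hashes do not remember any ordering, so the sort order is lost the moment the pairs are copied into %newhash; the order has to live in the containing structure instead. A minimal sketch of one way to do that, keeping each group as an array of [key, value] pairs rather than a hash:

        use strict;
        use warnings;

        # build an array of arrays-of-pairs, preserving the sort order
        my @sorted_groups;
        for my $href (@aoh) {
            my @pairs = map  { [ $_, $href->{$_} ] }
                        sort { $href->{$b} <=> $href->{$a} || $b <=> $a }
                        keys %$href;
            push @sorted_groups, \@pairs;
        }

        # print back in exactly the stored order
        for my $pairs (@sorted_groups) {
            print "$_->[0] => $_->[1]\n" for @$pairs;
            print "\n";
        }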


  • How can I keep a hash sorted?

    - by srk
        use strict;
        use warnings;

        my @aoh = (
            { 3 => 15, 4 => 8,  5 => 9  },
            { 3 => 11, 4 => 25, 5 => 6  },
            { 3 => 5,  4 => 18, 5 => 5  },
            { 0 => 16, 1 => 11, 2 => 7  },
            { 0 => 21, 1 => 13, 2 => 31 },
            { 0 => 11, 1 => 14, 2 => 31 },
        );

        # declaring a new array to store the sorted hashes
        my @new;
        print "\n-------------expected output------------\n";
        foreach my $href (@aoh) {
            # i want a new array of hashes where the hashes are sorted
            my %newhash;
            my @sorted_keys = sort { $href->{$b} <=> $href->{$a} || $b <=> $a } keys %$href;
            foreach my $key (@sorted_keys) {
                print "$key => $href->{$key}\n";
                $newhash{$key} = $href->{$key};
            }
            print "\n";
            push(@new, \%newhash);
        }
        print "-----------output i am getting---------------\n";
        foreach my $ref (@new) {
            my @skeys = keys %$ref;
            foreach my $key (@skeys) {
                print "$key => $ref->{$key}\n";
            }
            print "\n";
        }

    The output of the program (each line below is one group, printed one pair per line):

        -------------expected output------------
        3 => 15   5 => 9    4 => 8
        4 => 25   3 => 11   5 => 6
        4 => 18   5 => 5    3 => 5
        0 => 16   1 => 11   2 => 7
        2 => 31   0 => 21   1 => 13
        2 => 31   1 => 14   0 => 11

        -----------output i am getting---------------
        4 => 8    3 => 15   5 => 9
        4 => 25   3 => 11   5 => 6
        4 => 18   3 => 5    5 => 5
        1 => 11   0 => 16   2 => 7
        1 => 13   0 => 21   2 => 31
        1 => 14   0 => 11   2 => 31

    Please tell me what I am doing wrong in storing the hashes into a new array, and how I can achieve what I want. Thanks in advance.
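
    If the data really needs to stay a hash, insertion order can be preserved with a tied hash; a sketch assuming the Tie::IxHash module from CPAN is available:

        use strict;
        use warnings;
        use Tie::IxHash;

        my %newhash;
        tie %newhash, 'Tie::IxHash';   # keys %newhash now returns insertion order

        for my $key (@sorted_keys) {
            $newhash{$key} = $href->{$key};
        }
        print "$_ => $newhash{$_}\n" for keys %newhash;

    Pushing a reference to such a tied hash into @new then keeps each group's sorted order intact when it is read back later.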


  • validating cascading dropdownlist

    - by shruti
    I am working with ASP.NET MVC, and I have used a cascading dropdown list. I want to validate that the fields are not left blank. The view code is:

        Select Category:
        <%= Html.DropDownList("Makes", ViewData["Makes"] as SelectList, "Select Category") %>
        Select Subcategory:
        <%= Html.CascadingDropDownList("Models", "Makes") %>

    The controller code:

        public ActionResult AddSubCategoryPage()
        {
            var makeList = new SelectList(entityObj.Category.ToList(), "Category_id", "Category_name");
            ViewData["Makes"] = makeList;

            // Create Models view data
            var modelList = new CascadingSelectList(entityObj.Subcategory1.ToList(),
                "Category_id", "Subcategory_id", "Subcategory_name");
            ViewData["Models"] = modelList;
            return View("AddSubCategoryPage");
        }

    And for that I have made one class:

        public static class JavaScriptExtensions
        {
            public static string CascadingDropDownList(this HtmlHelper helper, string name, string associatedDropDownList)
            {
                var sb = new StringBuilder();
                // render select tag
                sb.AppendFormat("<select name='{0}' id='{0}'></select>", name);
                sb.AppendLine();
                // render data array
                sb.AppendLine("<script type='text/javascript'>");
                var data = (CascadingSelectList)helper.ViewDataContainer.ViewData[name];
                var listItems = data.GetListItems();
                var colArray = new List<string>();
                foreach (var item in listItems)
                    colArray.Add(String.Format("{{key:'{0}',value:'{1}',text:'{2}'}}", item.Key, item.Value, item.Text));
                var jsArray = String.Join(",", colArray.ToArray());
                sb.AppendFormat("$get('{0}').allOptions=[{1}];", name, jsArray);
                sb.AppendLine();
                sb.AppendFormat("$addHandler($get('{0}'), 'change', Function.createCallback(bindDropDownList, $get('{1}')));", associatedDropDownList, name);
                sb.AppendLine();
                sb.AppendLine("</script>");
                return sb.ToString();
            }
        }

        public class CascadingSelectList
        {
            private IEnumerable _items;
            private string _dataKeyField;
            private string _dataValueField;
            private string _dataTextField;

            public CascadingSelectList(IEnumerable items, string dataKeyField, string dataValueField, string dataTextField)
            {
                _items = items;
                _dataKeyField = dataKeyField;
                _dataValueField = dataValueField;
                _dataTextField = dataTextField;
            }

            public List<CascadingListItem> GetListItems()
            {
                var listItems = new List<CascadingListItem>();
                foreach (var item in _items)
                {
                    var key = DataBinder.GetPropertyValue(item, _dataKeyField).ToString();
                    var value = DataBinder.GetPropertyValue(item, _dataValueField).ToString();
                    var text = DataBinder.GetPropertyValue(item, _dataTextField).ToString();
                    listItems.Add(new CascadingListItem(key, value, text));
                }
                return listItems;
            }
        }

        public class CascadingListItem
        {
            public CascadingListItem(string key, string value, string text)
            {
                this.Key = key;
                this.Value = value;
                this.Text = text;
            }

            public string Key { get; set; }
            public string Value { get; set; }
            public string Text { get; set; }
        }

    But when I run the application it gives me the following error:

        Server Error in '/' Application.
        The parameters dictionary contains a null entry for parameter 'Models' of
        non-nullable type 'System.Int32' for method 'System.Web.Mvc.ActionResult
        AddSubCategoryPage(Int32, System.String, System.String)' in
        'CMS.Controllers.HomeController'. An optional parameter must be a reference
        type, a nullable type, or be declared as an optional parameter.
        Parameter name: parameters

    Please help me.
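
    The error is the model binder complaining that the POST action's int parameter named Models received no value because nothing was selected. A hedged sketch of the usual fix - the action shown is assumed, since only its signature appears in the error message - is to make the parameter nullable and report a validation error yourself:

        // assumed POST action; parameter names taken from the error message
        public ActionResult AddSubCategoryPage(int? Models, string makes, string other)
        {
            if (Models == null)
                ModelState.AddModelError("Models", "Please select a subcategory.");
            if (!ModelState.IsValid)
                return View();   // redisplay the form with the validation message
            // ... save and redirect on success ...
            return RedirectToAction("Index");
        }

    This also satisfies the blank-field validation you are after, since a blank dropdown now surfaces as a ModelState error instead of a server exception.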


  • AES Byte Systolic Architecture

    - by anum
    We are implementing an AES byte systolic architecture. Code:

        module key_expansion(kld, clk, key, key_expand, en);
            input kld, clk, en;
            input [127:0] key;
            wire [31:0] w0, w1, w2, w3;
            output [127:0] key_expand;
            reg [127:0] key_expand;
            reg [31:0] w[3:0];
            reg [3:0] ctr;
            //reg [31:0] w0,w1,w2,w3;
            wire [31:0] c0, c1, c2, c3;
            wire [31:0] tmp_w;
            wire [31:0] subword;
            wire [31:0] rcon;

            assign w0 = w[0];
            assign w1 = w[1];
            assign w2 = w[2];
            assign w3 = w[3];

            always @(posedge clk) begin
                w[0] <= #1 kld ? key[127:096] : w[0]^subword^rcon;
            end
            always @(posedge clk) begin
                w[1] <= #1 kld ? key[095:064] : w[0]^w[1]^subword^rcon;
            end
            always @(posedge clk) begin
                w[2] <= #1 kld ? key[063:032] : w[0]^w[2]^w[1]^subword^rcon;
            end
            always @(posedge clk) begin
                w[3] <= #1 kld ? key[031:000] : w[0]^w[3]^w[2]^w[1]^subword^rcon;
            end

            assign tmp_w = w[3];

            aes_sbox u0(.a(tmp_w[23:16]), .d(subword[31:24]));
            aes_sbox u1(.a(tmp_w[15:08]), .d(subword[23:16]));
            aes_sbox u2(.a(tmp_w[07:00]), .d(subword[15:08]));
            aes_sbox u3(.a(tmp_w[31:24]), .d(subword[07:00]));
            aes_rcon r0(.clk(clk), .kld(kld), .out_rcon(rcon));

            //assign key_expand={w0,w1,w2,w3};

            always @(posedge clk) begin
                if (!en) begin
                    ctr <= 0;
                end
                else if (|ctr) begin
                    key_expand <= 0;
                    ctr <= (ctr+1)%16;
                end
                else if (!(|ctr)) begin
                    key_expand <= {w0,w1,w2,w3};
                    ctr <= (ctr+1)%16;
                end
            end
        endmodule

    Problem: we want to generate a new key only once every 16 clock cycles, whereas this design generates a new key on every positive clock edge. To stop values from being assigned to w[0]..w[3] every cycle, we implemented the enable/counter logic above; it lets us present an output on key_expand every 16 cycles, but the captured key is wrong, because key_expand takes the latest value of w[0]..w[3], whereas we need the first value generated. We need to somehow block the updates to w[0]..w[3], but we are stuck. Please help.
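
    One possible direction, sketched below with an assumed step signal (not from the original code): gate the round-key registers with the same counter, so w[0]..w[3] only advance once per 16-cycle window and the value captured into key_expand stays stable in between.

        // sketch: advance the key registers only when 'step' pulses
        wire step = (ctr == 4'd0);

        always @(posedge clk) begin
            if (kld)
                w[0] <= key[127:096];
            else if (step)
                w[0] <= w[0] ^ subword ^ rcon;   // held on all other cycles
        end
        // ... apply the same 'else if (step)' guard to the w[1], w[2], w[3] blocks ...

    Because the same enable gates both the registers and the counter, the key visible on key_expand is the one computed at the start of the window rather than a value that has since advanced.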


  • Can anyone help me get data from a plist, specifically from the arrays inside it?

    - by jix
    Can anyone help me with getting data from this plist? I'm having trouble accessing the values of the three objects in the plist. I can see the full list of countries in my tableView, but I can't see the prices when I tap on a cell. Any help please; thanks.

    MY PLIST

        <plist version="1.0">
        <dict>
            <key>Afghanistan 3</key>
            <array>
                <string>RC $1.65</string>
                <string>CC $2.36</string>
                <string>EC 0</string>
            </array>
            <key>Albania 1</key>
            <array>
                <string>RC FREE</string>
                <string>CC $1.01</string>
            </array>
            <key>Algeria 2</key>
            <array>
                <string>RC $0.27</string>
                <string>CC $0.85</string>
            </array>
            <key>Andorra 2</key>
            <array>
                <string>RC FREE</string>
                <string>CC $0.93</string>

    Here is the code I have implemented in Xcode 4.5. CC is the calling rate (item 0 in the plist), RC is the receiving rate (item 1), and EC is the extra rate (item 2). How can I show the CC, RC and EC each in a label in the next view controller when I tap the cell?

    MY CODE

        NSString *ratesFile = [[NSBundle mainBundle] pathForResource:@"rates" ofType:@"plist"];
        rates = [[NSDictionary alloc] initWithContentsOfFile:ratesFile];
        NSArray *dictionaryKeys = [rates allKeys];
        name = [dictionaryKeys sortedArrayUsingSelector:@selector(compare:)];
        cc = [rates objectForKey:@"Item 0"];
        rc = [rates objectForKey:@"Item 1"];
        ec = [rates objectForKey:@"Item 2"];

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [rates count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
                cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
            }
            NSString *countryName = [name objectAtIndex:indexPath.row];
            cell.textLabel.text = countryName;
            return cell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSString *ccRate = [cc objectAtIndex:indexPath.row];
            if (!self.detailViewController) {
                self.detailViewController = [[DetailViewController alloc] initWithNibName:@"DetailViewController" bundle:nil];
            }
            self.detailViewController.detailItem = ccRate;
            [self.navigationController pushViewController:self.detailViewController animated:YES];
        }

    Thanks in advance.
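
    The dictionary's keys are the country names themselves, so [rates objectForKey:@"Item 0"] returns nil ("Item 0" is just how the plist editor labels array rows). A hedged sketch of the row-selection handler, assuming the three rates should be read from the tapped country's array in the order the plist shows them (RC, CC, EC):

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            // look the rates up by country name, not by "Item N" keys
            NSString *countryName = [name objectAtIndex:indexPath.row];
            NSArray *countryRates = [rates objectForKey:countryName];

            NSString *rcRate = countryRates.count > 0 ? countryRates[0] : nil; // e.g. "RC $1.65"
            NSString *ccRate = countryRates.count > 1 ? countryRates[1] : nil; // e.g. "CC $2.36"
            NSString *ecRate = countryRates.count > 2 ? countryRates[2] : nil; // e.g. "EC 0"

            // pass all three to the detail controller (e.g. via three properties)
            // and assign each to its label in that controller's viewDidLoad
        }

    The length checks matter because, as the plist shows, some countries have only two entries and no EC value.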


  • Do I need to manually create indexes for a DBIx::Class belongs_to relationship

    - by Dancrumb
    I'm using the DBIx::Class modules for an ORM approach to an application I have, and I'm having some problems with my relationships. I have the following:

        package MySchema::Result::ClusterIP;

        use strict;
        use warnings;
        use base qw/DBIx::Class::Core/;
        our $VERSION = '1.0';

        __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/);
        __PACKAGE__->table('cluster_ip');
        __PACKAGE__->add_columns(
            # Columns here
        );
        __PACKAGE__->set_primary_key('objkey');
        __PACKAGE__->belongs_to('configuration' => 'MySchema::Result::Configuration', 'config_key');
        __PACKAGE__->belongs_to('cluster' => 'MySchema::Result::Cluster',
            { 'foreign.config_key' => 'self.config_key',
              'foreign.id'         => 'self.cluster_id' });

    as well as

        package MySchema::Result::Cluster;

        use strict;
        use warnings;
        use base qw/DBIx::Class::Core/;
        our $VERSION = '1.0';

        __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/);
        __PACKAGE__->table('cluster');
        __PACKAGE__->add_columns(
            # Columns here
        );
        __PACKAGE__->set_primary_key('objkey');
        __PACKAGE__->belongs_to('configuration' => 'MySchema::Result::Configuration', 'config_key');
        __PACKAGE__->has_many('cluster_ip' => 'MySchema::Result::ClusterIP',
            { 'foreign.config_key' => 'self.config_key',
              'foreign.cluster_id' => 'self.id' });

    There are a couple of other modules, but I don't believe that they are relevant. When I attempt to deploy this schema, I get the following error:

        DBIx::Class::Schema::deploy(): DBI Exception: DBD::mysql::db do failed:
        Can't create table 'test.cluster_ip' (errno: 150) [for Statement "
        CREATE TABLE `cluster_ip` (
          `objkey` smallint(5) unsigned NOT NULL auto_increment,
          `config_key` smallint(5) unsigned NOT NULL,
          `cluster_id` char(16) NOT NULL,
          INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`),
          INDEX `cluster_ip_idx_config_key` (`config_key`),
          PRIMARY KEY (`objkey`),
          CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`)
            REFERENCES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE,
          CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`)
            REFERENCES `configuration` (`config_key`) ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=InnoDB"] at test_deploy.pl line 18

    From what I can tell, MySQL is complaining about the FOREIGN KEY constraint - in particular, the REFERENCES (config_key, id) clause pointing at the cluster table. From my reading of the MySQL documentation this seems like a reasonable complaint, especially in regards to the third bullet point on this doc page.

    Here's my question: am I missing something in the DBIx::Class module? I realize that I could explicitly create the necessary index to match up with this foreign key constraint, but that seems like repetitive work. Is there something I should be doing to make this occur implicitly?
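
    InnoDB's errno 150 here means the referenced columns (config_key, id) in cluster have no index that starts with them. DBIx::Class will create one implicitly if the column pair is declared unique on the Cluster result class; a hedged sketch (the constraint name is illustrative):

        # in MySchema::Result::Cluster, after set_primary_key
        __PACKAGE__->add_unique_constraint(
            cluster_config_key_id => [qw/config_key id/],
        );

    On deploy this becomes a UNIQUE index on (config_key, id), which satisfies the foreign key without hand-writing any DDL - and it also documents the fact that the relationship expects that pair to identify a single cluster row.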


  • WPF - List View Row Index and Validation

    - by abhishek
    Hi, I have a ListView with TextBoxes in the second column. I want to validate that a TextBox does not contain a number if the third column (data_type) for that row is "Text". I am unable to get the validation working. I have tried a few approaches; in one I handle the MouseDown event and try to get the row number so that I can read that row's data_type value and use it in the Validate method. I have been struggling for a week now and would appreciate any help.

        <ControlTemplate x:Key="validationTemplate">
            <DockPanel>
                <TextBlock Foreground="Red" FontSize="20">!</TextBlock>
                <AdornedElementPlaceholder/>
            </DockPanel>
        </ControlTemplate>

        <Style x:Key="textBoxInError" TargetType="{x:Type TextBox}">
            <Style.Triggers>
                <Trigger Property="Validation.HasError" Value="true">
                    <Setter Property="ToolTip"
                            Value="{Binding RelativeSource={x:Static RelativeSource.Self}, Path=(Validation.Errors)[0].ErrorContent}"/>
                </Trigger>
            </Style.Triggers>
        </Style>

        <DataTemplate x:Key="textTemplate">
            <TextBox HorizontalAlignment="Stretch" IsEnabled="{Binding XPath=./@isenabled}"
                     Validation.ErrorTemplate="{StaticResource validationTemplate}"
                     Style="{StaticResource textBoxInError}">
                <TextBox.Text>
                    <Binding XPath="./@value" UpdateSourceTrigger="PropertyChanged">
                        <Binding.ValidationRules>
                            <local:TextBoxMinMaxValidation>
                                <local:TextBoxMinMaxValidation.DataType>
                                    <local:DataTypeCheck Datatype="{Binding Source={StaticResource dataProvider}, XPath='/[@id=CustomerServiceQueueName]'}"/>
                                </local:TextBoxMinMaxValidation.DataType>
                                <local:TextBoxMinMaxValidation.ValidRange>
                                    <local:Int32RangeChecker
                                        Minimum="{Binding Source={StaticResource dataProvider}, XPath=./@min}"
                                        Maximum="{Binding Source={StaticResource dataProvider}, XPath=./@max}"/>
                                </local:TextBoxMinMaxValidation.ValidRange>
                            </local:TextBoxMinMaxValidation>
                        </Binding.ValidationRules>
                    </Binding>
                </TextBox.Text>
            </TextBox>
        </DataTemplate>

        <DataTemplate x:Key="dropDownTemplate">
            <ComboBox Name="cmbBox" HorizontalAlignment="Stretch"
                      SelectedIndex="{Binding XPath=./@value}"
                      ItemsSource="{Binding XPath=.//OPTION/@value}"
                      IsEnabled="{Binding XPath=./@isenabled}" />
        </DataTemplate>

        <DataTemplate x:Key="booldropDownTemplate">
            <ComboBox Name="cmbBox" HorizontalAlignment="Stretch"
                      SelectedIndex="{Binding XPath=./@value, Converter={StaticResource boolconvert}}">
                <ComboBoxItem>True</ComboBoxItem>
                <ComboBoxItem>False</ComboBoxItem>
            </ComboBox>
        </DataTemplate>

        <local:ControlTemplateSelector x:Key="myControlTemplateSelector"/>

        <Style x:Key="StretchedContainerStyle" TargetType="{x:Type ListViewItem}">
            <Setter Property="HorizontalContentAlignment" Value="Stretch" />
            <Setter Property="Template" Value="{DynamicResource ListBoxItemControlTemplate1}"/>
        </Style>

        <ControlTemplate x:Key="ListBoxItemControlTemplate1" TargetType="{x:Type ListBoxItem}">
            <Border SnapsToDevicePixels="true" x:Name="Bd" Background="{TemplateBinding Background}"
                    BorderBrush="{DynamicResource {x:Static SystemColors.ActiveBorderBrushKey}}"
                    Padding="{TemplateBinding Padding}" BorderThickness="0,0.5,0,0.5">
                <GridViewRowPresenter SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"
                                      VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
            </Border>
        </ControlTemplate>

        <Style x:Key="CustomHeaderStyle" TargetType="{x:Type GridViewColumnHeader}">
            <Setter Property="Background" Value="LightGray" />
            <Setter Property="FontWeight" Value="Bold"/>
            <Setter Property="FontFamily" Value="Arial"/>
            <Setter Property="HorizontalContentAlignment" Value="Left" />
            <Setter Property="Padding" Value="2,0,2,0"/>
        </Style>
    </UserControl.Resources>

        <Grid x:Name="GridViewControl" Height="Auto">
            <Grid.RowDefinitions>
                <RowDefinition Height="*" />
                <RowDefinition Height="34"/>
            </Grid.RowDefinitions>
            <ListView x:Name="ListViewControl" Grid.Row="0"
                      ItemContainerStyle="{DynamicResource StretchedContainerStyle}"
                      ItemTemplateSelector="{DynamicResource myControlTemplateSelector}"
                      IsSynchronizedWithCurrentItem="True"
                      ItemsSource="{Binding Source={StaticResource dataProvider}, XPath=//CONFIGURATION}">
                <ListView.View>
                    <GridView>
                        <GridViewColumn Header="ID" HeaderContainerStyle="{StaticResource CustomHeaderStyle}"
                                        DisplayMemberBinding="{Binding XPath=./@id}"/>
                        <GridViewColumn Header="VALUE" HeaderContainerStyle="{StaticResource CustomHeaderStyle}"
                                        CellTemplateSelector="{DynamicResource myControlTemplateSelector}" />
                        <GridViewColumn Header="DATATYPE" HeaderContainerStyle="{StaticResource CustomHeaderStyle}"
                                        DisplayMemberBinding="{Binding XPath=./@data_type}"/>
                        <GridViewColumn Header="DESCRIPTION" HeaderContainerStyle="{StaticResource CustomHeaderStyle}"
                                        DisplayMemberBinding="{Binding XPath=./@description}"
                                        Width="{Binding ElementName=ListViewControl, Path=ActualWidth}"/>
                    </GridView>
                </ListView.View>
            </ListView>
            <StackPanel Grid.Row="1">
                <Button Grid.Row="1" HorizontalAlignment="Stretch" Height="34" HorizontalContentAlignment="Stretch">
                    <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Center"
                                Orientation="Horizontal" FlowDirection="RightToLeft" Height="30">
                        <Button Grid.Row="1" Content="Apply" Padding="0,0,0,0" Margin="6,2,0,2" Name="btn_Apply"
                                HorizontalAlignment="Right" VerticalContentAlignment="Center"
                                HorizontalContentAlignment="Center" Width="132" IsTabStop="True"
                                Click="btn_ApplyClick" Height="24" />
                    </StackPanel>
                </Button>
            </StackPanel>
        </Grid>
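
    One approach that avoids hit-testing for the row entirely: hand the ValidationRule the row's data type and let it do the check itself. A hedged C# sketch (the class and property names are illustrative, not from the project):

        using System.Globalization;
        using System.Linq;
        using System.Windows.Controls;

        // rejects digits when the row's data type is "Text"
        public class TextTypeValidationRule : ValidationRule
        {
            // set per row from XAML; ValidationRule is not part of the visual
            // tree, so this is usually fed through a Freezable binding proxy
            public string DataType { get; set; }

            public override ValidationResult Validate(object value, CultureInfo cultureInfo)
            {
                var text = value as string ?? string.Empty;
                if (DataType == "Text" && text.Any(char.IsDigit))
                    return new ValidationResult(false, "Numeric characters are not allowed for Text fields.");
                return ValidationResult.ValidResult;
            }
        }

    DataType cannot be bound directly because ValidationRule is not a DependencyObject; the common workaround is a small Freezable "binding proxy" declared in the row's resources that carries the @data_type value across.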


  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus but I'm having trouble with the configuration of the distributors needed to make each step in the process scalable. Here's some info:

    - The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart.
    - Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor.
    - I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when a step is done, not Send() them.
    - A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message.
    - A process may need to join forks. I imagine I should use Sagas for this.

    Hopefully these assumptions are good; otherwise I'm in more trouble than I thought. For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages:

    - NodeA workers contain an IHandleMessages processor and publish EventA
    - NodeB workers contain an IHandleMessages processor and publish EventB
    - NodeC workers contain an IHandleMessages processor, and then the pipeline is complete

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:

        <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control"
                          DistributorDataAddress="NodeA.Distrib.Data">
          <MessageEndpointMappings>
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeB:

        <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control"
                          DistributorDataAddress="NodeB.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeC:

        <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control"
                          DistributorDataAddress="NodeC.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:

        <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
        <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:

        <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
        <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:

        <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
        <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing with 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen:

    1. Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    2. Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies.

    So, my questions:

    1. Is this kind of setup possible?
    2. Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.
    3. Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's done event is a reply, how do you keep the Publish that enables later extensibility on published events?


  • python list mysteriously getting set to something within my django/piston handler

    - by Anverc
    To start, I'm very new to Python, let alone Django and Piston. Anyway, I've created a new BaseHandler subclass, "class BaseApiHandler(BaseHandler)", so that I can extend some of the stuff that BaseHandler does. This worked fine until I added a new filter that can limit results to the first or last result. Now I can refresh the API page over and over, and sometimes it limits the result even if I don't include /limit/whatever in my URL. I've added some debug info to my return value to see what is happening, and that's where it gets weirder. The debug strings will make more sense after you see the code, but here they are for reference.

    When the results are correct:

        "statusmsg": "2 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',}"

    When the results are incorrect (once you read the code you'll notice two things wrong - first, it doesn't have 'limit':'None', and secondly it shouldn't even get this far to begin with):

        "statusmsg": "1 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',with limit[0,1](limit,None),}"

    It may be important to note that I'm the only person with access to the server running this right now, so even if it were a cache issue, it doesn't make sense that I can get different results just by hitting F5 while viewing http://localhost/api/hours_detail/datestamp/2009-03-02/empid/22. Here's the code, broken into urls.py and handlers.py, so that you can see what I'm doing.

    URLS.PY

        urlpatterns = patterns('',
            # hours_detail/id/{id}/empid/{empid}/projid/{projid}/datestamp/{datestamp}/daterange/{fromdate}to{todate}/limit/{first|last}/exact
            # empid is required
            # id, empid, projid, datestamp, daterange can be in any order
            url(r'^api/hours_detail/(?:' + \
                r'(?:[/]?id/(?P<id>\d+))?' + \
                r'(?:[/]?empid/(?P<empid>\d+))?' + \
                r'(?:[/]?projid/(?P<projid>\d+))?' + \
                r'(?:[/]?datestamp/(?P<datestamp>\d{4,}[-/\.]\d{2,}[-/\.]\d{2,}))?' + \
                r'(?:[/]?daterange/(?P<daterange>(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})(?:to|/-)(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})))?' + \
                r')+' + \
                r'(?:/limit/(?P<limit>(?:first|last)))?' + \
                r'(?:/(?P<exact>exact))?$', hours_detail_resource),

    HANDLERS.PY

        # inherit from BaseHandler to add the extra functionality i need to process the possibly null URL params
        class BaseApiHandler(BaseHandler):
            # keep track of the handler so the data is represented back to me correctly
            post_name = 'base'

            # THIS IS THE LIST IN QUESTION - SOMETIMES IT IS GETTING SET TO [0,1] MYSTERIOUSLY
            # this gets set to a list when the results are to be limited
            limit = None

            def has_limit(self):
                return (isinstance(self.limit, list) and len(self.limit) == 2)

            def process_kwarg_read(self, key, value, d_post, b_exact):
                """ this should be overridden in the derived classes to process kwargs """
                pass

            # override 'read' so we can better handle our api's searching capabilities
            def read(self, request, *args, **kwargs):
                d_post = {'status': 0, 'statusmsg': 'Nothing Happened'}
                try:
                    # setup the named response object
                    # select all employees then filter - querysets are lazy in django
                    # the actual query is only done once data is needed, so this may
                    # seem like some memory hog slow beast, but it's actually not.
                    d_post[self.post_name] = self.queryset(request)

                    # this is a string that holds debug information... it's the
                    # string I mentioned before pasting this code
                    s_query = ''
                    b_exact = False
                    if 'exact' in kwargs and kwargs['exact'] <> None:
                        b_exact = True
                        s_query = '\'exact\':True,'
                    for key, value in kwargs.iteritems():
                        # the regex url possibilities will push None into the kwargs dictionary
                        # if not specified, so just continue looping through if that's the case
                        if value == None or key == 'exact':
                            continue
                        # write to the s_query string so we have a nice error message
                        s_query = '%s\'%s\':\'%s\',' % (s_query, key, value)
                        # now process this key/value kwarg
                        self.process_kwarg_read(key=key, value=value, d_post=d_post, b_exact=b_exact)
                    # end of the kwargs for loop
                    else:
                        if self.has_limit():
                            # THIS SEEMS TO GET HIT SOMETIMES IF YOU CONSTANTLY REFRESH THE API
                            # PAGE, EVEN THOUGH THE LINE IN THE FOR LOOP WHICH UPDATES s_query
                            # DOESN'T GET HIT, AND THUS process_kwarg_read ALSO DOESN'T GET HIT,
                            # SO NEITHER DOES limit = [0,1]
                            s_query = '%swith limit[%s,%s](limit,%s),' % (s_query, self.limit[0], self.limit[1], kwargs['limit'])
                            d_post[self.post_name] = d_post[self.post_name][self.limit[0]:self.limit[1]]
                    if d_post[self.post_name].count() == 0:
                        d_post['status'] = 0
                        d_post['statusmsg'] = '%s not found with query: {%s}' % (self.post_name, s_query)
                    else:
                        d_post['status'] = 1
                        d_post['statusmsg'] = '%s %s found with query: {%s}' % (d_post[self.post_name].count(), self.post_name, s_query)
                except:
                    e = sys.exc_info()[1]
                    d_post['status'] = 0
                    d_post['statusmsg'] = 'error: %s' % e
                    d_post[self.post_name] = []
                return d_post

        class HoursDetailHandler(BaseApiHandler):
            #allowed_methods = ('GET',)
            model = HoursDetail
            exclude = ()
            post_name = 'hours_detail'

            def process_kwarg_read(self, key, value, d_post, b_exact):
                if ...
                    # I have several if/elif statements here that check for other things...
                    # 'self.limit =' only shows up in the following elif:
                elif key == 'limit':
                    order_by = 'clock_time'
                    if value == 'last':
                        order_by = '-clock_time'
                    d_post[self.post_name] = d_post[self.post_name].order_by(order_by)
                    # TO GET HERE, THE ONLY PLACE IN CODE WHERE self.limit IS SET, YOU MUST
                    # HAVE GONE THROUGH THE value == None CHECK????
                    self.limit = [0, 1]
                else:
                    raise NameError

            def read(self, request, *args, **kwargs):
                # empid is required, so make sure it exists before running BaseApiHandler's read method
                if not ('empid' in kwargs and kwargs['empid'] <> None and kwargs['empid'] >= 0):
                    return {'status': 0, 'statusmsg': 'empid cannot be empty'}
                else:
                    return BaseApiHandler.read(self, request, *args, **kwargs)

    Does anyone have a clue how else self.limit might be getting set to [0, 1]? Am I misunderstanding kwargs or loops in Python?
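
    One thing worth ruling out: limit = None is a class attribute, and Piston reuses handler instances across requests, so a self.limit set by an earlier /limit/... request can survive into later requests that never touch it - which would explain why plain refreshes flip between the two behaviours. A hedged sketch of the guard:

        # reset per-request state at the start of read(); otherwise the
        # instance attribute set by a previous request is still there
        def read(self, request, *args, **kwargs):
            self.limit = None
            d_post = {'status': 0, 'statusmsg': 'Nothing Happened'}
            # ... rest unchanged ...

    More generally, anything mutable or request-specific is safer as a local variable (or an argument threaded through process_kwarg_read) than as handler state.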


  • strange behavior while including a class in php

    - by user1864539
    I'm experiencing strange behavior with PHP. Basically I want to require a class within a PHP script. I know it is straightforward and I have done it before, but when I do it here it changes the behavior of my jQuery (1.8.3) AJAX response. I'm running a WAMP setup and my PHP version is 5.4.6. Here is a sample of my index.html head (omitting the jQuery JS include):

        <script>
        $(document).ready(function(){
            $('#submit').click(function(){
                var action = $('#form').attr('action');
                var form_data = {
                    fname: $('#fname').val(),
                    lname: $('#lname').val(),
                    phone: $('#phone').val(),
                    email: $('#email').val(),
                    is_ajax: 1
                };
                $.ajax({
                    type: $('#form').attr('method'),
                    url: action,
                    data: form_data,
                    success: function(response){
                        switch(response){
                            case 'ok':
                                var msg = 'data saved';
                                break;
                            case 'ko':
                                var msg = 'Oops something wrong happen';
                                break;
                            default:
                                var msg = 'misc:<br/>'+response;
                                break;
                        }
                        $('#message').html(msg);
                    }
                });
                return false;
            });
        });
        </script>

    The body:

        <div id="message"></div>
        <form id="form" action="handler.php" method="post">
            <p>
                <input type="text" name="fname" id="fname" placeholder="fname">
                <input type="text" name="lname" id="lname" placeholder="lname">
            </p>
            <p>
                <input type="text" name="phone" id="phone" placeholder="phone">
                <input type="text" name="email" id="email" placeholder="email">
            </p>
            <input type="submit" name="submit" value="submit" id="submit">
        </form>

    And the handler.php file:

        <?php
        require('class/Container.php');

        $filename = 'xml/memory.xml';
        $is_ajax = $_REQUEST['is_ajax'];
        if(isset($is_ajax) && $is_ajax){
            $fname = $_REQUEST['fname'];
            $lname = $_REQUEST['lname'];
            $phone = $_REQUEST['phone'];
            $email = $_REQUEST['email'];

            $obj = new Container;
            $obj->insertData('fname',$fname);
            $obj->insertData('lname',$lname);
            $obj->insertData('phone',$phone);
            $obj->insertData('email',$email);
            $tmp = $obj->give();
            $result = $tmp['_obj'];

            /* Push data inside array */
            $array = array();
            foreach($result as $key => $value){
                array_push($array,$key,$value);
            }

            $xml = simplexml_load_file($filename);
            // check if there is any data in
            if(count($xml->elements->data) == 0){
                // if not, create the structure
                $xml->elements->addChild('data','');
            }
            // proceed now that we do have the structure
            if(count($xml->elements->data) == 1){
                foreach($result as $key => $value){
                    $xml->elements->data->addChild($key,$value);
                }
                $xml->saveXML($filename);
                echo 'ok';
            }else{
                echo 'ko';
            }
        }
        ?>

    The Container class:

        <?php
        class Container{
            private $_obj;

            public function __construct(){
                $this->_obj = array();
            }

            public function addData($data = array()){
                if(!empty($data)){
                    $oldData = $this->_obj;
                    $data = array_merge($oldData,$data);
                    $this->_obj = $data;
                }
            }

            public function removeData($key){
                if(!empty($key)){
                    $oldData = $this->_obj;
                    unset($oldData[$key]);
                    $this->_obj = $oldData;
                }
            }

            public function outputData(){
                return $this->_obj;
            }

            public function give(){
                return get_object_vars($this);
            }

            public function insertData($key,$value){
                $this->_obj[$key] = $value;
            }
        }
        ?>

    The strange thing is that my result always falls into the default switch statement, even though the AJAX response should match one of the two cases. I noticed that if I just paste the Container class at the top of the handler.php file, everything works properly, but that kind of defeats what I'm trying to achieve. I tried different ways to include the Container class, but the issue seems specific to this scenario. I'm still learning PHP and my guess is that I'm missing something really basic. I also searched Stack Overflow and PHP.net regarding this issue, without success.
    Regards,
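
    A classic cause of exactly this symptom: any stray output from an included file (whitespace or a BOM before the opening <?php, or after the closing ?>) gets prepended to the AJAX response, so the string is no longer exactly 'ok' or 'ko' and falls through to default. A hedged sketch of the usual prevention - omit the closing tag in pure-PHP class files, which the PHP manual itself recommends:

        <?php
        // Container.php: leave the closing ?> off entirely so trailing
        // whitespace after it can never leak into the output
        class Container
        {
            // ... class body unchanged ...
        }
        // (no closing tag on purpose)

    A quick way to confirm the diagnosis is to inspect the raw response in the browser's network tab, or to compare against a trimmed response on the client ($.trim(response)) and see whether the cases start matching.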


  • Min-Ordered Binomial Heap Insertion in Java

    - by Charodd Richardson
    I'm writing Java code for a min-ordered binomial heap, and I have to implement insert and remove-min. I'm having a very big problem inserting into the heap; I have been stuck on this for a couple of days now and it is due tomorrow. Whenever I insert, it only prints the item I just inserted instead of the whole tree (which is printed in preorder). For example, if I insert 1 it prints (1); then if I insert 2 it prints (2) instead of (1(2)). It keeps printing only the number I inserted last rather than the whole preordered tree. I would be very grateful if someone could help me with this problem. Thank you so much in advance; here is my code.

        public class BHeap {
            int key;
            int degree; // the degree (number of children)
            BHeap parent, leftmostChild, rightmostChild, rightSibling, root, previous, next;

            public BHeap() {
                key = 0;
                degree = 0;
                parent = null;
                leftmostChild = null;
                rightmostChild = null;
                rightSibling = null;
                root = null;
                previous = null;
                next = null;
            }

            public BHeap merge(BHeap x, BHeap y) {
                BHeap newHeap = new BHeap();
                y.rightSibling = x.root;
                BHeap currentHeap = y;
                BHeap nextHeap = y.rightSibling;
                while (currentHeap.rightSibling != null) {
                    if (currentHeap.degree == nextHeap.degree) {
                        if (currentHeap.key < nextHeap.key) {
                            if (currentHeap.degree == 0) {
                                currentHeap.leftmostChild = nextHeap;
                                currentHeap.rightmostChild = nextHeap;
                                currentHeap.rightSibling = nextHeap.rightSibling;
                                nextHeap.rightSibling = null;
                                nextHeap.parent = currentHeap;
                                currentHeap.degree++;
                            } else {
                                newHeap = currentHeap;
                                newHeap.rightmostChild.rightSibling = nextHeap;
                                newHeap.rightmostChild = nextHeap;
                                nextHeap.parent = newHeap;
                                newHeap.degree++;
                                nextHeap.rightSibling = null;
                                nextHeap = newHeap.rightSibling;
                            }
                        } else {
                            if (currentHeap.degree == 0) {
                                nextHeap.rightmostChild = currentHeap;
                                nextHeap.rightmostChild.root = nextHeap.rightmostChild; // add
                                nextHeap.leftmostChild = currentHeap;
                                nextHeap.leftmostChild.root = nextHeap.leftmostChild; // add
                                currentHeap.parent = nextHeap;
                                currentHeap.rightSibling = null;
                                currentHeap.root = currentHeap; // add
                                nextHeap.degree++;
                            } else {
                                newHeap = nextHeap;
                                newHeap.rightmostChild.rightSibling = currentHeap;
                                newHeap.rightmostChild = currentHeap;
                                currentHeap.parent = newHeap;
                                newHeap.degree++;
                                currentHeap = newHeap.rightSibling;
                                currentHeap.rightSibling = null;
                            }
                        }
                    } else {
                        currentHeap = currentHeap.rightSibling;
                        nextHeap = nextHeap.rightSibling;
                    }
                }
                return y;
            }

            public void Insert(int x) {
                /* BHeap newHeap = new BHeap();
                   newHeap.key = x;
                   if (this.root == null) {
                       this.root = newHeap;
                       return;
                   } else {
                       this.root = merge(newHeap, this.root);
                   } */
                BHeap newHeap = new BHeap();
                newHeap.key = x;
                if (this.root == null) {
                    this.root = newHeap;
                } else {
                    this.root = merge(this, newHeap);
                }
            }

            public void RemoveMin() {
                BHeap newHeap = new BHeap();
                BHeap child = new BHeap();
                newHeap = this;
                BHeap pos = newHeap.next;
                while (pos != null) {
                    if (pos.key < newHeap.key) {
                        newHeap = pos;
                    }
                    pos = pos.rightSibling;
                }
                pos = this;
                BHeap B1 = new BHeap();
                if (newHeap.previous != null) {
                    newHeap.previous.rightSibling = newHeap.rightSibling;
                    B1 = pos.leftmostChild;
                    B1.rightSibling = pos;
                    pos.leftmostChild = pos.rightmostChild.leftmostChild;
                } else {
                    newHeap = newHeap.rightSibling;
                    newHeap.previous.rightSibling = newHeap.rightSibling;
                    B1 = pos.leftmostChild;
                    B1.rightSibling = pos;
                    pos.leftmostChild = pos.rightmostChild.leftmostChild;
                }
                merge(newHeap, B1);
            }

            public void Display() {
                System.out.print("(");
                System.out.print(this.root.key);
                if (this.leftmostChild != null) {
                    this.leftmostChild.Display();
                }
                System.out.print(")");
                if (this.rightSibling != null) {
                    this.rightSibling.Display();
                }
            }
        }
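
    One likely culprit: Insert calls merge(this, newHeap), but merge starts with y.rightSibling = x.root and then returns y, so the value assigned back to this.root is the single new node, and the previous trees are only reachable through its sibling pointer - which Display, starting from root, may never traverse. A hedged sketch of the standard shape of binomial-heap insertion (the union method here is hypothetical, named only to show the structure):

        // insert by melding a one-node heap into the root list; union()
        // would walk both root lists in degree order, linking pairs of
        // trees that have equal degree
        public void insert(int x) {
            BHeap single = new BHeap();
            single.key = x;
            single.root = single;
            root = union(root, single);
        }

    Keeping merge a pure function of two root lists, rather than mixing this, root and sibling pointers, makes the binomial-heap invariant (at most one tree per degree) much easier to maintain and debug.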


  • Optimizing an SQL query using INNER JOIN and ORDER BY

    - by Sergio B
    I'm trying to optimize the following query without success. Any idea how it could be indexed to prevent the temporary table and the filesort?

        EXPLAIN SELECT SQL_NO_CACHE `groups`.*
        FROM `groups`
        INNER JOIN `memberships` ON `groups`.id = `memberships`.group_id
        WHERE ((`memberships`.user_id = 1)
           AND (`memberships`.`status_code` = 1 AND `memberships`.`manager` = 0))
        ORDER BY groups.created_at DESC
        LIMIT 5;

        +----+-------------+-------------+--------+--------------------------+---------+---------+----------------------------------------------+------+----------------------------------------------+
        | id | select_type | table       | type   | possible_keys            | key     | key_len | ref                                          | rows | Extra                                        |
        +----+-------------+-------------+--------+--------------------------+---------+---------+----------------------------------------------+------+----------------------------------------------+
        |  1 | SIMPLE      | memberships | ref    | grp_usr,grp,usr,grp_mngr | usr     | 5       | const                                        |    5 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | groups      | eq_ref | PRIMARY                  | PRIMARY | 4       | sportspool_development.memberships.group_id  |    1 |                                              |
        +----+-------------+-------------+--------+--------------------------+---------+---------+----------------------------------------------+------+----------------------------------------------+
        2 rows in set (0.00 sec)

    Indexes on groups:

        +--------+------------+-----------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table  | Non_unique | Key_name                          | Seq_in_index | Column_name     | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +--------+------------+-----------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
        | groups | 0          | PRIMARY                           | 1            | id              | A         | 6           | NULL     | NULL   |      | BTREE      |         |
        | groups | 1          | index_groups_on_name              | 1            | name            | A         | 6           | NULL     | NULL   | YES  | BTREE      |         |
        | groups | 1          | index_groups_on_privacy_setting   | 1            | privacy_setting | A         | 6           | NULL     | NULL   | YES  | BTREE      |         |
        | groups | 1          | index_groups_on_created_at        | 1            | created_at      | A         | 6           | NULL     | NULL   | YES  | BTREE      |         |
        | groups | 1          | index_groups_on_id_and_created_at | 1            | id              | A         | 6           | NULL     | NULL   |      | BTREE      |         |
        | groups | 1          | index_groups_on_id_and_created_at | 2            | created_at      | A         | 6           | NULL     | NULL   | YES  | BTREE      |         |
        +--------+------------+-----------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+

    Indexes on memberships:

        +-------------+------------+----------------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table       | Non_unique | Key_name                                                 | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +-------------+------------+----------------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | memberships | 0          | PRIMARY                                                  | 1            | id          | A         | 2           | NULL     | NULL   |      | BTREE      |         |
        | memberships | 0          | grp_usr                                                  | 1            | group_id    | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 0          | grp_usr                                                  | 2            | user_id     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | grp                                                      | 1            | group_id    | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | usr                                                      | 1            | user_id     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | grp_mngr                                                 | 1            | group_id    | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | grp_mngr                                                 | 2            | manager     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | complex_index                                            | 1            | group_id    | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | complex_index                                            | 2            | user_id     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | complex_index                                            | 3            | status_code | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | complex_index                                            | 4            | manager     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | index_memberships_on_user_id_and_status_code_and_manager | 1            | user_id     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | index_memberships_on_user_id_and_status_code_and_manager | 2            | status_code | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        | memberships | 1          | index_memberships_on_user_id_and_status_code_and_manager | 3            | manager     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |
        +-------------+------------+----------------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
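
    A hedged suggestion: the plan reads memberships via the single-column usr index, so the remaining WHERE columns are checked row by row. Extending the existing three-column index with group_id makes it covering for this query, so the join key comes straight off the index:

        ALTER TABLE memberships
          ADD INDEX user_status_mngr_grp (user_id, status_code, manager, group_id);

    The ORDER BY groups.created_at is on the other side of the join, so the filesort over the few surviving rows is hard to eliminate entirely; with LIMIT 5 on a small intermediate result, it is usually cheap once the membership lookup itself is covered.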


  • Problem building STLport NDK r5/ Android

    - by user558299
    Hi all, I'm trying to build STLport for Android. I followed these steps, but they are not working:

    1 - Clone the STLport repository:

        git clone git://stlport.git.sourceforge.net/gitroot/stlport/stlport

    2 - Configure the environment:

        ./configure --target=arm-eabi --with-extra-cxxflags="-fshort-enums" --with-extra-cflags="-fshort-enums"

    3 - From the src directory, build it:

        make SYSROOT="{my NDK path}/platforms/android-5/arch-arm/" release-static

    But I get the following errors:

        In file included from ../stlport/stl/_alloc.h:45,
                         from ../stlport/memory:29,
                         from dll_main.cpp:41:
        ../stlport/stl/_new.h:45:24: error: new: No such file or directory
        In file included from ../stlport/stl/_limits.h:36,
                         from ../stlport/limits:29,
                         from dll_main.cpp:48:
        ../stlport/stl/_cwchar.h:26:30: error: cstddef: No such file or directory
        In file included from ../stlport/stl/_utility.h:35,
                         from ../stlport/utility:35,
                         from dll_main.cpp:40:
        ../stlport/type_traits:889: error: 'declval' was not declared in this scope
        ../stlport/type_traits:889: error: expected primary-expression before '>' token
        ../stlport/type_traits:889: error: expected primary-expression before ')' token
        ../stlport/type_traits:889: error: 'declval' was not declared in this scope
        ../stlport/type_traits:889: error: expected primary-expression before '>' token
        ../stlport/type_traits:889: error: expected primary-expression before ')' token
        ../stlport/type_traits:889: error: ISO C++ forbids declaration of 'decltype' with no type
        ../stlport/type_traits:889: error: ISO C++ forbids in-class initialization of non-const static member 'decltype'
        ../stlport/type_traits:889: error: template declaration of 'int std::tr1::detail::decltype'
        ../stlport/type_traits:942: error: ISO C++ forbids declaration of 'decltype' with no type
        ../stlport/type_traits:942: error: ISO C++ forbids in-class initialization of non-const static member 'decltype'
        ../stlport/type_traits:942: error: template declaration of 'int std::tr1::detail::decltype'
        make: *** [obj/arm-eabi-gcc/so/dll_main.o] Error 1

    Is there any include dir or configuration I'm missing? Thanks, Sergio


  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            )
        )

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small? (A sketch of this pattern follows below.)
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    - Portman: I'm using a clustered index because when the data is all imported I will need to access it sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?
    - Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.
    - Jason: I am not running any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.

    ~ Andrew
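
    Since the staging-table idea comes up in the question, here is a hedged sketch of that pattern (the staging table name is illustrative): bulk copy into an index-free heap, then periodically move rows into the main table in clustered-key order so the inserts append in sequence:

        -- staging heap: same columns, no keys or indexes
        CREATE TABLE BulkDataStaging (
            ContainerId int NOT NULL, BinId smallint NOT NULL,
            Sequence smallint NOT NULL, ItemId int NOT NULL,
            [Left] smallint NOT NULL, [Top] smallint NOT NULL,
            [Right] smallint NOT NULL, [Bottom] smallint NOT NULL
        );

        -- periodic transfer, sorted to match the clustered index
        INSERT INTO BulkData WITH (TABLOCK)
        SELECT ContainerId, BinId, Sequence, ItemId, [Left], [Top], [Right], [Bottom]
        FROM BulkDataStaging
        ORDER BY ContainerId, BinId, Sequence;

        TRUNCATE TABLE BulkDataStaging;

    The TABLOCK hint and the ORDER BY matching the clustered key are what let the transfer run as large sequential writes rather than scattered page splits.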


  • MySQL DDL error creating tables

    - by Alexandstein
    I am attempting to create tables for a MySQL database, but I am having some syntactical issues. Syntax checking seems to behave differently between tables for some reason: while all the other tables go through, the stocks table doesn't work, despite seeming to use the same syntax patterns.

        CREATE TABLE users (
            user_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
            username VARCHAR(30) NOT NULL,
            password CHAR(41) NOT NULL,
            date_joined DATETIME NOT NULL,
            funds DOUBLE UNSIGNED NOT NULL,
            PRIMARY KEY(user_id),
            UNIQUE KEY(username)
        );

        CREATE TABLE owned_stocks (
            id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
            user_id SMALLINT UNSIGNED NOT NULL,
            paid_price DOUBLE UNSIGNED NOT NULL,
            quantity MEDIUMINT UNSIGNED NOT NULL,
            purchase_date DATETIME NOT NULL,
            PRIMARY KEY(id)
        );

        CREATE TABLE tracking_stocks (
            ticker VARCHAR(5) NOT NULL,
            user_id SMALLINT UNSIGNED NOT NULL,
            PRIMARY KEY(ticker)
        );

        CREATE TABLE stocks (
            ticker VARCHAR(5) NOT NULL,
            last DOUBLE UNSIGNED NOT NULL,
            high DOUBLE UNSIGNED NOT NULL,
            low DOUBLE UNSIGNED NOT NULL,
            company_name VARCHAR(30) NOT NULL,
            last_updated INT UNSIGNED NOT NULL,
            change DOUBLE NOT NULL,
            percent_change DOUBLE NOT NULL,
            PRIMARY KEY(ticker)
        );

    Am I just missing a really obvious syntactical issue?

        ERROR: #1064 - You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near
        'change DOUBLE NOT NULL, percent_change DOUBLE NOT NULL, last DOUBLE' at line 4
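
    The error points straight at the change column: CHANGE is a reserved word in MySQL (as in ALTER TABLE ... CHANGE), so it must be quoted with backticks or renamed (e.g. price_change). A sketch of the quoted version:

        CREATE TABLE stocks (
            ticker VARCHAR(5) NOT NULL,
            last DOUBLE UNSIGNED NOT NULL,
            high DOUBLE UNSIGNED NOT NULL,
            low DOUBLE UNSIGNED NOT NULL,
            company_name VARCHAR(30) NOT NULL,
            last_updated INT UNSIGNED NOT NULL,
            `change` DOUBLE NOT NULL,          -- backticks make the reserved word legal
            percent_change DOUBLE NOT NULL,
            PRIMARY KEY(ticker)
        );

    The other tables go through because none of their column names collide with a reserved word.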

    Read the article

  • SQL Server Long Query

    - by thormj
    Ok... I don't understand why this query is taking so long (MSSQL Server 2005). Typical output is 3K rows with a 5.5 minute execution time:

        SELECT dbo.Point.PointDriverID, dbo.Point.AssetID, dbo.Point.PointID,
               dbo.Point.PointTypeID, dbo.Point.PointName, dbo.Point.ForeignID,
               dbo.Pointtype.TrendInterval, coalesce(dbo.Point.trendpts,5) AS TrendPts,
               LastTimeStamp = PointDTTM, LastValue = PointValue, Timezone
        FROM dbo.Point
        LEFT JOIN dbo.PointType ON dbo.PointType.PointTypeID = dbo.Point.PointTypeID
        LEFT JOIN dbo.PointData ON dbo.Point.PointID = dbo.PointData.PointID
            AND PointDTTM = (SELECT Max(PointDTTM) FROM dbo.PointData
                             WHERE PointData.PointID = Point.PointID)
        LEFT JOIN dbo.SiteAsset ON dbo.SiteAsset.AssetID = dbo.Point.AssetID
        LEFT JOIN dbo.Site ON dbo.Site.SiteID = dbo.SiteAsset.SiteID
        WHERE onlinetrended = 1 and WantTrend = 1

    PointData is the biggun, but I thought its definition should allow me to pick up what I want easily enough:

        CREATE TABLE [dbo].[PointData](
            [PointID] [int] NOT NULL,
            [PointDTTM] [datetime] NOT NULL,
            [PointValue] [real] NULL,
            [DataQuality] [tinyint] NULL,
            CONSTRAINT [PK_PointData_1] PRIMARY KEY CLUSTERED
            (
                [PointID] ASC,
                [PointDTTM] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        CREATE NONCLUSTERED INDEX [IX_PointDataDesc] ON [dbo].[PointData]
        (
            [PointID] ASC,
            [PointDTTM] DESC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    PointData is 550M rows, and Point (the source of PointID) is only 28K rows. I tried making an indexed view, but I can't figure out how to get the last timestamp/value out of it in a compatible way (no Max, no subquery, no CTE). This runs twice an hour, and after it runs I put more data into those 3K PointIDs that I selected. I thought about putting LastTime/LastValue directly into Point, but that seems like the wrong approach. Am I missing something, or should I rebuild something? (I'm also the DBA, but I know very little about A'ing a DB!)
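
    One rewrite worth trying on SQL Server 2005 (a sketch only, untested against this schema; it omits the Site/SiteAsset joins and the Timezone column for brevity): replace the correlated Max() subquery with OUTER APPLY plus TOP 1, which can seek the descending index IX_PointDataDesc once per qualifying Point row instead of aggregating over the 550M-row table:

        SELECT p.PointDriverID, p.AssetID, p.PointID, p.PointTypeID,
               p.PointName, p.ForeignID, pt.TrendInterval,
               COALESCE(p.trendpts, 5) AS TrendPts,
               LastTimeStamp = d.PointDTTM, LastValue = d.PointValue
        FROM dbo.Point p
        LEFT JOIN dbo.PointType pt ON pt.PointTypeID = p.PointTypeID
        OUTER APPLY (
            -- TOP 1 ordered descending matches (PointID ASC, PointDTTM DESC),
            -- so this should be a single index seek per point.
            SELECT TOP 1 pd.PointDTTM, pd.PointValue
            FROM dbo.PointData pd
            WHERE pd.PointID = p.PointID
            ORDER BY pd.PointDTTM DESC
        ) d
        WHERE p.onlinetrended = 1 AND p.WantTrend = 1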

    Read the article

  • MySQL query does not return any data

    - by Alex L
    Hi, I need to retrieve data from a specific time period. The query works fine until I specify the time period. Is there something wrong with the way I specify the time period? I know there are many entries within that time-frame. This query returns empty:

        SELECT stop_times.stop_id,
               STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') as stopTime,
               routes.route_short_name, routes.route_long_name, trips.trip_headsign
        FROM trips
        JOIN stop_times ON trips.trip_id = stop_times.trip_id
        JOIN routes ON routes.route_id = trips.route_id
        WHERE stop_times.stop_id = 5508
        HAVING stopTime BETWEEN DATE_SUB(stopTime, INTERVAL 1 MINUTE)
                            AND DATE_ADD(stopTime, INTERVAL 20 MINUTE);

    Here is its EXPLAIN:

        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        | id | select_type | table      | type   | possible_keys    | key     | key_len | ref                           | rows | Extra       |
        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        |  1 | SIMPLE      | stop_times | ref    | trip_id,stop_id  | stop_id | 5       | const                         |  605 | Using where |
        |  1 | SIMPLE      | trips      | eq_ref | PRIMARY,route_id | PRIMARY | 4       | wmata_gtfs.stop_times.trip_id |    1 |             |
        |  1 | SIMPLE      | routes     | eq_ref | PRIMARY          | PRIMARY | 4       | wmata_gtfs.trips.route_id     |    1 |             |
        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        3 rows in set (0.00 sec)

    The query works if I remove the HAVING clause (don't specify a time range). It returns:

        +---------+----------+------------------+-----------------+---------------+
        | stop_id | stopTime | route_short_name | route_long_name | trip_headsign |
        +---------+----------+------------------+-----------------+---------------+
        |    5508 | 06:31:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 06:57:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 07:23:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 07:49:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 08:15:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 08:41:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 09:08:00 | "80"             | ""              | "FORT TOTTEN" |

    I am using Google Transit format data loaded into MySQL. The query is supposed to provide stop times and bus routes for a given bus stop. For a bus stop, I am trying to get:

    - Route name
    - Bus name
    - Bus direction (headsign)
    - Stop time

    The results should be limited only to bus times from 1 minute ago to 20 minutes from now. Please let me know if you could help.
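
    For what it's worth, the HAVING clause as written compares stopTime to a window derived from stopTime itself, and because STR_TO_DATE with a time-only format returns a TIME value, DATE_SUB/DATE_ADD on it can yield NULL and filter every row out. A hedged sketch of what was probably intended, assuming the window should be anchored to the current clock time (midnight wraparound is ignored here):

        SELECT stop_times.stop_id,
               STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') AS stopTime,
               routes.route_short_name, routes.route_long_name, trips.trip_headsign
        FROM trips
        JOIN stop_times ON trips.trip_id = stop_times.trip_id
        JOIN routes ON routes.route_id = trips.route_id
        WHERE stop_times.stop_id = 5508
        -- Anchor the window to the clock, not to stopTime itself:
        HAVING stopTime BETWEEN SUBTIME(CURTIME(), '00:01:00')
                            AND ADDTIME(CURTIME(), '00:20:00');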

    Read the article

  • Using Hibernate with MS ACCESS 2007 Database (Free JDBC Driver)

    - by Quentin T.
    1. I want to do a reverse engineering run with the Hibernate plugin for Eclipse against an MS Access 2007 database. I'm forced to use an existing MS Access 2007 db. An easy solution is to buy the HXTT driver, but I want to use a free driver to do my work. So I tried to apply this post: http://www.programmingforfuture.com/2011/06/how-to-use-ms-access-with-hibernate.html (it uses the SQL Server dialect and the driver sun.jdbc.odbc.JdbcOdbcDriver). Unfortunately I get an error that nobody else on the internet seems to have hit:

        Exception while generating code
        Reason: org.hibernate.exception.GenericJDBCException:
        Error while reading primary key meta data for `c:/myaccessdb.mdb`.TableTest1

    I tried changing the primary keys in the MS Access DB (deleting all primary keys), and I tried the reverse engineering on an MS Access DB with only one table and no primary key, but I got the same problem every time.

    2. The purpose of my job is to transfer data daily (or weekly) from an existing MS Access 2007 database into an Oracle 11g database, and I thought of using a Java procedure (Hibernate EJB) launched automatically every week to do the transfer. Is this the best solution?

    Configuration: sun.jdbc.odbc.JdbcOdbcDriver v???, Hibernate v3.4, Eclipse

    ps: If you are an HXTT developer or seller, please be indulgent with my post ;). Making money by making people believe that you help is bad! A possible solution is to use the Derby Client driver, as in this post: Does anyone know if Hibernate and java will work effectively with Access? But a clarification of Rich Seller's answer is required: could you explain your configuration (hibernate.cfg.xml, persistence.xml, and what URL you use in the property name="hibernate.connection.url") without using the paying HXTT driver, but with the free Derby driver?

    Read the article

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:

        select t.Chunk as LeftChunk,
               t.ChunkHash as LeftChunkHash,
               q.Chunk as RightChunk,
               q.ChunkHash as RightChunkHash,
               count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash

    And the following explain table:

        id  select_type  table    type    possible_keys                key          key_len  ref                      rows    Extra
        1   SIMPLE       subsets  ref     PRIMARY,IDIndex,SubsetIndex  SubsetIndex  767      const                    522014  "Using where; Using temporary; Using filesort"
        1   SIMPLE       subsets  eq_ref  PRIMARY,IDIndex,SubsetIndex  PRIMARY      771      sotero.subsets.Id,const  1       "Using where; Using index"
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      "Using where"
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12

    Note the "Using temporary; Using filesort". When this query is run, I quickly run out of RAM (presumably because of the temp table), then the HDD kicks in and the query slows to a halt. I thought it might be an index issue, so I started adding a few that sort of made sense:

        Table   Non_unique  Key_name                   Seq_in_index  Column_name  Collation  Cardinality  Sub_part  Packed  Index_type
        chunks  0           PRIMARY                    1             ChunkId      A          17796190     NULL      NULL    BTREE
        chunks  1           ChunkHashIndex             1             ChunkHash    A          243783       NULL      NULL    BTREE
        chunks  1           IDIndex                    1             Id           A          1483015      NULL      NULL    BTREE
        chunks  1           ChunkIndex                 1             Chunk        A          243783       NULL      NULL    BTREE
        chunks  1           ChunkTypeIndex             1             ChunkType    A          2            NULL      NULL    BTREE
        chunks  1           chunkHashByChunkIDIndex    1             ChunkHash    A          243783       NULL      NULL    BTREE
        chunks  1           chunkHashByChunkIDIndex    2             ChunkId      A          17796190     NULL      NULL    BTREE
        chunks  1           chunkHashByChunkTypeIndex  1             ChunkHash    A          243783       NULL      NULL    BTREE
        chunks  1           chunkHashByChunkTypeIndex  2             ChunkType    A          261708       NULL      NULL    BTREE
        chunks  1           chunkHashByIDIndex         1             ChunkHash    A          243783       NULL      NULL    BTREE
        chunks  1           chunkHashByIDIndex         2             Id           A          17796190     NULL      NULL    BTREE

    But it's still using the temporary table. The db engine is MyISAM. How can I get rid of the "Using temporary; Using filesort" in this query? Just changing to InnoDB without explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is to just add the proper index, then that's much easier than migrating to another db engine.
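
    Two hedged things to try, sketched below (the ALTER TABLE target and index name are guesses, since the table behind chunksubset isn't shown): on MySQL versions of this era, GROUP BY implicitly sorts by the grouping columns, so ORDER BY NULL suppresses the filesort half (the temporary table may remain), and a composite covering index can at least serve the self-join and grouping from the index alone:

        -- Hypothetical covering index; adjust to the real base table.
        ALTER TABLE chunks ADD INDEX idx_id_hash_chunk (Id, ChunkHash, Chunk);

        select t.Chunk as LeftChunk, t.ChunkHash as LeftChunkHash,
               q.Chunk as RightChunk, q.ChunkHash as RightChunkHash,
               count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash
        order by null;  -- suppress the implicit GROUP BY sort (the filesort)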

    Read the article

  • Do I need to write a trigger for such a simple constraint?

    - by Paul Hanbury
    I really had a hard time knowing what words to put into the title of my question, as I am not especially sure if there is a database pattern related to my problem. I will try to simplify matters as much as possible to get directly to the heart of the issue. Suppose I have some tables. The first one is a list of widget types:

        create table widget_types (
            widget_type_id number(7,0) primary key,
            description varchar2(50)
        );

    The next one contains icons:

        create table icons (
            icon_id number(7,0) primary key,
            picture blob
        );

    Even though the users get to select their preferred widget, there is a predefined subset of widgets that they can choose from for each widget type.

        create table icon_associations (
            widget_type_id number(7,0) references widget_types,
            icon_id number(7,0) references icons,
            primary key (widget_type_id, icon_id)
        );

        create table icon_prefs (
            user_id number(7,0) references users,
            widget_type_id number(7,0),
            icon_id number(7,0),
            primary key (user_id, widget_type_id),
            foreign key (widget_type_id, icon_id) references icon_associations
        );

    Pretty simple so far. Let us now assume that if we are displaying an icon to a user who has not set up his preferences, we choose one of the appropriate images associated with the current widget. I'd like to specify the preferred icon to display in such a case, and here's where I run into my problem:

        alter table icon_associations add (
            is_preferred char(1) check( is_preferred in ('y','n') )
        );

    I do not see how I can enforce that for each widget_type there is one, and only one, row having is_preferred set to 'y'. I know that in MySQL, I am able to write a subquery in my check constraint to quickly resolve this issue. This is not possible with Oracle. Is my mistake that this column has no business being in the icon_associations table? If not, where should it go? Is this a case where, in Oracle, the constraint can only be handled with a trigger? I ask only because I'd like to go the constraint route if at all possible. Thanks so much for your help, Paul
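
    For what it's worth, a common Oracle idiom that avoids a trigger here is a function-based unique index, sketched below against the schema above. Note that it enforces at most one preferred row per widget type, not exactly one; guaranteeing "exactly one" declaratively is harder and usually does end up in trigger or application territory.

        -- Rows with is_preferred <> 'y' map to NULL and are ignored by the
        -- unique index; a second 'y' row for the same widget_type_id
        -- violates uniqueness.
        create unique index uq_one_preferred_icon
            on icon_associations (case when is_preferred = 'y' then widget_type_id end);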

    Read the article

  • How to Map Two Tables To One Class in Fluent NHibernate

    - by Richard Nagle
    I am having a problem with Fluent NHibernate mapping two tables to one class. I have the following database schema:

        TABLE dbo.LocationName (
            LocationId INT PRIMARY KEY,
            LanguageId INT PRIMARY KEY,
            Name VARCHAR(200)
        )

        TABLE dbo.Language (
            LanguageId INT PRIMARY KEY,
            Locale CHAR(5)
        )

    And I want to build the following class definition:

        public class LocationName
        {
            public virtual int LocationId { get; private set; }
            public virtual int LanguageId { get; private set; }
            public virtual string Name { get; set; }
            public virtual string Locale { get; set; }
        }

    Here is my mapping class:

        public LocalisedNameMap()
        {
            WithTable("LocationName");
            UseCompositeId()
                .WithKeyProperty(x => x.LanguageId)
                .WithKeyProperty(x => x.LocationId);
            Map(x => x.Name);
            WithTable("Language", lang =>
            {
                lang.WithKeyColumn("LanguageId");
                lang.Map(x => x.Locale);
            });
        }

    The problem is with the mapping of the Locale field being from another table, and in particular that the keys between those tables don't match. Whenever I run the application with this mapping, I get the following error on startup:

        Foreign key (FK7FC009CCEEA10EEE:Language [LanguageId]) must have same number
        of columns as the referenced primary key (LocationName [LanguageId, LocationId])

    How do I tell NHibernate to map from LocationName to Language using only the LanguageId field?

    Read the article

  • PHP: parse $_FILES[] data in multidimesional array

    - by superUntitled
    I have been looking around for an answer to this and have not found one anywhere; I am hoping someone has done this before! I have a form that allows dynamic duplication of the form fields. The form allows file uploads and text input, so the data is sent in both the $_POST and $_FILES arrays. The initial set of inputs looks like this:

        <input type="text" name="primary[1][text]" />
        <input type="file" name="primary[1][file]" />
        <input type="text" class="a" name="secondary[1][text][]" />
        <input type="file" name="secondary[1][file][]" />

    When duplicated, the fields are incremented; they look like this:

        <input type="text" name="primary[2][text]" />
        <input type="file" name="primary[2][file]" />
        <input type="text" class="a" name="secondary[2][text][]" />
        <input type="file" name="secondary[2][file][]" />

    To complicate matters, the "secondary" form fields can also be duplicated (thus the [] at the end of the secondary name array). How can I parse the posted $_FILES array? I have tried something like this:

        foreach ($_FILES['question'] as $f_num) {
            echo $f['file']['name'];
        }

    but I get an "Undefined index: file... " error.

    Read the article

  • Social Networking & Network Affiliations

    - by Code Sherpa
    Hi. I am in the process of planning a database for a social networking project and stumbled upon this url, which is a (crude) reverse-engineered guess at Facebook's schema: http://www.flickr.com/photos/ikhnaton2/533233247/

    What is of interest to me is the notion of "Affiliations", and I am trying to fully understand how they work, technically speaking. Where I am somewhat confused is the NetworkID column in the "FacebookGroups", "FacebookEvent", and "Affiliations" tables (NID in Affiliations). How are these network affiliations interconnected?

    In my own project, I have a simple profile table:

        CREATE TABLE [dbo].[Profiles](
            [profileid] [int] IDENTITY(1,1) NOT NULL,
            [userid] [uniqueidentifier] NOT NULL,
            [username] [varchar](255) COLLATE Latin1_General_CI_AI NOT NULL,
            [applicationname] [varchar](255) COLLATE Latin1_General_CI_AI NOT NULL,
            [isanonymous] [bit] NULL,
            [lastactivity] [datetime] NULL,
            [lastupdated] [datetime] NULL,
            CONSTRAINT [PK__Profiles__1DB06A4F] PRIMARY KEY CLUSTERED
            (
                [profileid] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY],
            CONSTRAINT [PKProfiles] UNIQUE NONCLUSTERED
            (
                [username] ASC,
                [applicationname] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
        ) ON [PRIMARY]

    One profile can have many affiliations, and one affiliation can have many profiles. I would like to design it in such a way that the relationships between affiliations tell me something about the associated profiles. In fact, based on the affiliations that users select, I would like to infer as many things as possible about that person. My question is: how should I design my network affiliation tables, and how do they operate per the above requirements? A rough SQL schema would be appreciated in your response. Thanks in advance...
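
    As a rough, hedged sketch of one way to model this (all names below are hypothetical, not taken from the Facebook schema): a many-to-many junction table between Profiles and Affiliations, with an optional self-reference on Affiliations so affiliations can be related to one another:

        CREATE TABLE [dbo].[Affiliations](
            [affiliationid] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [name] [varchar](255) NOT NULL,
            -- Optional self-reference: lets an affiliation belong to a broader
            -- network, so relationships between affiliations can be walked to
            -- infer things about their members.
            [parentaffiliationid] [int] NULL
                REFERENCES [dbo].[Affiliations]([affiliationid])
        )

        CREATE TABLE [dbo].[ProfileAffiliations](
            [profileid] [int] NOT NULL
                REFERENCES [dbo].[Profiles]([profileid]),
            [affiliationid] [int] NOT NULL
                REFERENCES [dbo].[Affiliations]([affiliationid]),
            -- Composite key: each profile joins each affiliation at most once.
            PRIMARY KEY ([profileid], [affiliationid])
        )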

    Read the article
